Article: ZeroEA: A Zero-Training Entity Alignment Framework via Pre-Trained Language Model

Title: ZeroEA: A Zero-Training Entity Alignment Framework via Pre-Trained Language Model
Authors: Huo, Nan; Cheng, Reynold; Kao, Ben; Ning, Wentao; Haldar, Nur Al Hasan; Li, Xiaodong; Li, Jinyang; Najafi, Mohammad Matin; Li, Tian; Qu, Ge
Issue Date: 30-May-2024
Publisher: VLDB Endowment
Citation: Proceedings of the VLDB Endowment, 2024, v. 17, n. 7, p. 1765-1774
Abstract

Entity alignment (EA), a crucial task in knowledge graph (KG) research, aims to identify equivalent entities across different KGs to support downstream tasks like KG integration, text-to-SQL, and question-answering systems. Given the rich semantic information within KGs, pre-trained language models (PLMs) have shown promise in EA tasks due to their exceptional context-aware encoding capabilities. However, current PLM-based solutions encounter obstacles such as the need for extensive training, expensive data annotation, and inadequate incorporation of structural information. In this study, we introduce a novel zero-training EA framework, ZeroEA, which effectively captures both semantic and structural information for PLMs. Specifically, the Graph2Prompt module serves as a bridge between graph structure and plain text by converting KG topology into textual context suitable for PLM input. Additionally, to provide PLMs with concise and clear input text of reasonable length, we design a motif-based neighborhood filter to eliminate noisy neighbors. Comprehensive experiments and analyses on 5 benchmark datasets demonstrate the effectiveness of ZeroEA, which outperforms all leading competitors and achieves state-of-the-art performance in entity alignment. Notably, our study highlights the considerable potential of EA techniques in improving the performance of downstream tasks, thereby benefiting the broader research field.
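As a rough illustration of the zero-training pipeline sketched in the abstract, the following Python snippet (not the authors' code) verbalizes each entity's one-hop facts into a textual prompt, encodes the prompts with an off-the-shelf pre-trained encoder, and matches entities across two toy KGs by cosine similarity without any training. The sentence-transformers encoder, the model name all-MiniLM-L6-v2, and the simple neighbor cap standing in for the motif-based neighborhood filter are all assumptions made for illustration only.

# Illustrative sketch only; not the ZeroEA implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

def verbalize(entity, triples, max_neighbors=5):
    # Turn an entity's one-hop facts into a short textual prompt
    # (a simplified, hypothetical stand-in for the Graph2Prompt idea).
    facts = [f"{h} {r.replace('_', ' ')} {t}." for (h, r, t) in triples if h == entity]
    return f"{entity}. " + " ".join(facts[:max_neighbors])

def align(entities_a, triples_a, entities_b, triples_b,
          model_name="all-MiniLM-L6-v2"):
    # Zero-training matching: encode verbalized entities from both KGs with a
    # pre-trained encoder and pair each source entity with its most similar
    # target entity by cosine similarity. No fine-tuning is performed.
    model = SentenceTransformer(model_name)
    emb_a = model.encode([verbalize(e, triples_a) for e in entities_a],
                         normalize_embeddings=True)
    emb_b = model.encode([verbalize(e, triples_b) for e in entities_b],
                         normalize_embeddings=True)
    sims = np.asarray(emb_a) @ np.asarray(emb_b).T  # cosine similarity matrix
    return {entities_a[i]: entities_b[int(j)]
            for i, j in enumerate(sims.argmax(axis=1))}

if __name__ == "__main__":
    kg1 = [("Paris", "capital_of", "France"), ("Paris", "located_in", "Europe")]
    kg2 = [("City_of_Paris", "is_capital_of", "French_Republic"),
           ("Berlin", "is_capital_of", "Germany")]
    print(align(["Paris"], kg1, ["City_of_Paris", "Berlin"], kg2))

In this toy setup the filtering step is just a neighbor cap; the paper's motif-based filter instead selects structurally informative neighbors before verbalization.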


Persistent Identifier: http://hdl.handle.net/10722/348508
ISSN: 2150-8097
2023 Impact Factor: 2.6
2023 SCImago Journal Rankings: 2.666

 

DC Field: Value
dc.contributor.author: Huo, Nan
dc.contributor.author: Cheng, Reynold
dc.contributor.author: Kao, Ben
dc.contributor.author: Ning, Wentao
dc.contributor.author: Haldar, Nur Al Hasan
dc.contributor.author: Li, Xiaodong
dc.contributor.author: Li, Jinyang
dc.contributor.author: Najafi, Mohammad Matin
dc.contributor.author: Li, Tian
dc.contributor.author: Qu, Ge
dc.date.accessioned: 2024-10-10T00:31:11Z
dc.date.available: 2024-10-10T00:31:11Z
dc.date.issued: 2024-05-30
dc.identifier.citation: Proceedings of the VLDB Endowment, 2024, v. 17, n. 7, p. 1765-1774
dc.identifier.issn: 2150-8097
dc.identifier.uri: http://hdl.handle.net/10722/348508
dc.description.abstract: Entity alignment (EA), a crucial task in knowledge graph (KG) research, aims to identify equivalent entities across different KGs to support downstream tasks like KG integration, text-to-SQL, and question-answering systems. Given the rich semantic information within KGs, pre-trained language models (PLMs) have shown promise in EA tasks due to their exceptional context-aware encoding capabilities. However, current PLM-based solutions encounter obstacles such as the need for extensive training, expensive data annotation, and inadequate incorporation of structural information. In this study, we introduce a novel zero-training EA framework, ZeroEA, which effectively captures both semantic and structural information for PLMs. Specifically, the Graph2Prompt module serves as a bridge between graph structure and plain text by converting KG topology into textual context suitable for PLM input. Additionally, to provide PLMs with concise and clear input text of reasonable length, we design a motif-based neighborhood filter to eliminate noisy neighbors. Comprehensive experiments and analyses on 5 benchmark datasets demonstrate the effectiveness of ZeroEA, which outperforms all leading competitors and achieves state-of-the-art performance in entity alignment. Notably, our study highlights the considerable potential of EA techniques in improving the performance of downstream tasks, thereby benefiting the broader research field.
dc.language: eng
dc.publisher: VLDB Endowment
dc.relation.ispartof: Proceedings of the VLDB Endowment
dc.title: ZeroEA: A Zero-Training Entity Alignment Framework via Pre-Trained Language Model
dc.type: Article
dc.identifier.doi: 10.14778/3654621.3654640
dc.identifier.scopus: eid_2-s2.0-85195654054
dc.identifier.volume: 17
dc.identifier.issue: 7
dc.identifier.spage: 1765
dc.identifier.epage: 1774
dc.identifier.issnl: 2150-8097
