
CSProm-KG

Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting

Figure: Overview of CSProm-KG.

This repository contains the source code for our paper "Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting", accepted to Findings of ACL 2023.

Dependencies
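No package list is reproduced here. As a rough, assumed baseline for a PLM-based KGC codebase like this one (package names and versions are illustrative, not taken from the repository):

    # Assumed environment setup; check the repository's requirements file for exact versions.
    conda create -n csprom-kg python=3.8
    conda activate csprom-kg
    pip install torch transformers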

Dataset:

CSProm-KG is evaluated on three static KGC benchmarks (WN18RR, FB15k-237 and Wikidata5M) and two temporal KGC benchmarks (ICEWS14 and ICEWS05-15).

Pretrained Checkpoint:

To enable quick evaluation, we upload the trained models. Download the checkpoint folders to ./checkpoint/ and run the evaluation command line for the corresponding dataset (a sketch is given below).
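A minimal sketch of what that evaluation call might look like. The entry point main.py and the -dataset / -model_path / -test_only flags are assumptions for illustration, not the repository's confirmed interface; the repository's own scripts are authoritative.

    # Hypothetical evaluation command; script name and flags are assumed, not confirmed.
    python main.py -dataset WN18RR -model_path checkpoint/WN18RR/model.ckpt -test_only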

The results are:

Dataset       MRR        H@1      H@3      H@10
WN18RR        0.572660   52.06%   59.00%   67.79%
FB15k-237     0.357701   26.90%   39.07%   53.55%
Wikidata5m    0.379789   34.32%   39.91%   44.57%
ICEWS14       0.627971   54.74%   67.73%   77.30%
ICEWS05-15    0.626890   54.27%   67.84%   78.22%

Training and testing:
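No training commands are reproduced here; as a hedged sketch under the same assumptions as the evaluation example above (main.py and its flags are hypothetical):

    # Hypothetical training commands; the real interface lives in the repository's scripts.
    python main.py -dataset WN18RR        # train on WN18RR
    python main.py -dataset FB15k-237     # train on FB15k-237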

Citation

If you use our work or find it helpful, please cite:

@inproceedings{chen-etal-2023-dipping,
    title = "Dipping {PLM}s Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting",
    author = "Chen, Chen  and
      Wang, Yufei  and
      Sun, Aixin  and
      Li, Bing  and
      Lam, Kwok-Yan",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.729",
    pages = "11489--11503",
    abstract = "Knowledge Graph Completion (KGC) often requires both KG structural and textual information to be effective. Pre-trained Language Models (PLMs) have been used to learn the textual information, usually under the fine-tune paradigm for the KGC task. However, the fine-tuned PLMs often overwhelmingly focus on the textual information and overlook structural knowledge. To tackle this issue, this paper proposes CSProm-KG (Conditional Soft Prompts for KGC) which maintains a balance between structural information and textual knowledge. CSProm-KG only tunes the parameters of Conditional Soft Prompts that are generated by the entities and relations representations. We verify the effectiveness of CSProm-KG on three popular static KGC benchmarks WN18RR, FB15K-237 and Wikidata5M, and two temporal KGC benchmarks ICEWS14 and ICEWS05-15. CSProm-KG outperforms competitive baseline models and sets new state-of-the-art on these benchmarks. We conduct further analysis to show (i) the effectiveness of our proposed components, (ii) the efficiency of CSProm-KG, and (iii) the flexibility of CSProm-KG.",
}