David-Li0406 / Contextulization-Distillation

Code and data for EACL 2024 paper "Contextualization Distillation from Large Language Models for Knowledge Graph Completion"

There were some problems replicating the results #2

Open · Air2024 opened this issue 3 months ago

Air2024 commented 3 months ago
Hello, first of all, thank you for your contribution. I have run into a few problems while trying to replicate the results and would like to ask for your help.
1. I ran KG-S2S on the dataset you provided together with knowledge_context.txt, and the result was lower than the one reported in the paper (I am using the WN18RR dataset).
2. When I used a knowledge_context.txt that I generated myself, or even replaced it with a randomly shuffled version, the experimental results and the loss at every step stayed exactly the same, even though I made sure -contextualization was set to True. In other words, knowledge_context.txt had no effect on the experiment (a rough sketch of this check follows the list).
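For reference, this is roughly how I produced the shuffled control file; the file path and the one-context-per-line layout are just how my local copy is organized, not necessarily the repository's exact format:

```python
import random

# Shuffle the context lines so each triple is paired with an unrelated
# description; paths below are my local layout (assumption).
src = "data/WN18RR/knowledge_context.txt"
dst = "data/WN18RR/knowledge_context_shuffled.txt"

with open(src) as f:
    lines = f.readlines()

random.seed(0)
random.shuffle(lines)

with open(dst, "w") as f:
    f.writelines(lines)

# Training with -contextualization True on the original vs. the shuffled
# file gave identical losses at every step, so the context seems unused.
```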
Below is a screenshot of my results. Thank you for helping me out.

[screenshot: reproduction results]

David-Li0406 commented 3 months ago

Hey,

I need to apologize: I found that something is wrong with the current implementation of CD in KG-S2S. The contextualization loss is missing entirely, which is why the losses with and without -contextualization are identical.
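For clarity, the fix is essentially to add the auxiliary contextualization term back into the training loss. A rough sketch (the model/batch field names and the lambda_context weight are illustrative, not the exact code I will push; it assumes a HuggingFace-style seq2seq model whose forward returns a .loss):

```python
def training_step(model, batch, lambda_context=1.0, use_contextualization=True):
    # Main KG-S2S objective: generate the target entity tokens.
    main_out = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        labels=batch["target_entity_ids"],
    )
    loss = main_out.loss

    if use_contextualization:
        # Auxiliary objective: regenerate the LLM-produced context
        # (from knowledge_context.txt) conditioned on the same triple prompt.
        context_out = model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["context_ids"],
        )
        loss = loss + lambda_context * context_out.loss

    return loss
```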

I will upload a new version in several days and will let you know.

Air2024 commented 3 months ago

Thank you very much for your answer. I will wait for the update.

Air2024 commented 3 months ago

Sorry to bother you, but is the new version ready? I would like to use your code and cite your paper as soon as possible.

David-Li0406 commented 3 months ago

@Air2024 Hey, I am sorry for the delay. I have updated the KG-S2S implementation. You should now get a score about 0.5~1 points higher on WN18RR than the original baseline. I have also attached my reproduction results here for reference:

[screenshot: updated reproduction results]

Consider citing our work if you find it helpful!