malllabiisc / EmbedKGQA

ACL 2020: Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
Apache License 2.0
412 stars 95 forks

kge target #119

Closed ticoAg closed 1 year ago

ticoAg commented 2 years ago

First of all, thanks for your nice work. Following your work, I retrained the KG embeddings for MetaQA and got a hits@1 of 0.365, but I'm not sure whether that is good enough for the RoBERTa model to learn knowledge from the KG. Could you give me some general guidelines for assessing the KG embedding model? For example, is 0.4 or more for hits@1/hits@10 enough for the QA model to learn something useful? On the other hand, I also trained on fbwq and got about 0.27 for hits@1; is that enough? Thanks for your answer~

apoorvumang commented 2 years ago

Hi, thanks for your interest!

> Could give me some general params to assessment the kg embedding model? Like 0.4 or more for hits_at_1/hits_at_10 is enough to learn something from model to use in QA.

I am assuming the question is this: what is a good metric to judge whether the KG embedding model has been trained well enough to proceed with using those embeddings for QA?

My opinion: if the KG is known to be reasonably complete, look at the train MRR and ensure it is > 0.9. Test MRR is a good indicator of generalizability, but if the train MRR is bad, EmbedKGQA won't even be able to answer questions about facts already present in the KG.
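For reference, the metrics being discussed can be computed from the 1-based rank of the true entity among all candidates for each evaluated triple. A minimal sketch (the `ranks` list here is made-up illustration data, not output from the actual model):

```python
def mrr_and_hits(ranks, k=1):
    """Compute mean reciprocal rank and hits@k from 1-based ranks
    of the true entity among all candidate entities."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = sum(1 for r in ranks if r <= k) / n
    return mrr, hits

# Hypothetical ranks of the correct entities on *training* triples.
ranks = [1, 1, 2, 1, 4]
mrr, hits1 = mrr_and_hits(ranks, k=1)
# mrr = (1 + 1 + 0.5 + 1 + 0.25) / 5 = 0.75
# hits1 = 3/5 = 0.6
```

Following the advice above, you would want this MRR on the training triples to be above 0.9 before moving on to the QA stage.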

> On the other hand, I also train the fbwq and get a result of about 0.27 for top 1,if it's enough?

Could you please elaborate on this?