malllabiisc / EmbedKGQA

ACL 2020: Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
Apache License 2.0
412 stars 95 forks

Are 2-hop and 3-hop triples needed during the KGE training phase? #114

Closed mug2mag closed 2 years ago

mug2mag commented 2 years ago

@apoorvumang

Hi, I have a question about the knowledge embedding training: are 2-hop and 3-hop triples also trained during the KGE phase? That is, are triples in the fbwq_full dataset like (m.0rh6k location.location.people_born_here m.02qd3hj) (a 3-hop triple), (m.010bss7d film.genre m.03k9fj) (a 2-hop triple), and (m.0gkt5x0 country m.06qd3) (a 1-hop triple) all put together to learn the knowledge embeddings via KGE?

Any reply would be appreciated.

apoorvumang commented 2 years ago

I'm not sure I understand what you mean by 2/3-hop triples. location.location.people_born_here is a single relation in Freebase, not a 3-hop relation path. It looks like 3 relations concatenated together because of the naming schema Freebase uses, but in reality it is a single relation.
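To make this concrete, here is a minimal sketch (not code from the repo) of how a Freebase-style triple line is parsed: the line splits into exactly three whitespace-separated fields, and the dotted relation string stays intact as one atomic relation.

```python
def parse_triple(line):
    """Split a triple line into (head, relation, tail).

    The relation string may contain dots (Freebase's domain.type.property
    naming schema), but it is never split further -- it is one atomic relation.
    """
    head, relation, tail = line.strip().split()
    return head, relation, tail


triple = parse_triple("m.0rh6k location.location.people_born_here m.02qd3hj")
# triple == ("m.0rh6k", "location.location.people_born_here", "m.02qd3hj")
```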

mug2mag commented 2 years ago

@apoorvumang Thanks for your reply.

location.location.people_born_here is a single relation in Freebase, not a 3-hop relation path

So how do you define a 3-hop relation in the fbwq_full dataset? And did you use (m.0rh6k location.location.people_born_here m.02qd3hj) as training data during KGE training?

Furthermore, the published MetaQA dataset has separate 1-hop, 2-hop, and 3-hop train, dev, and test sets. Did you gather all the training data to train one model, and then test that single model on the three hop settings separately?

apoorvumang commented 2 years ago

So how do you define a 3-hop relation in the fbwq_full dataset?

We don't define relations as 1-, 2-, or 3-hop; we define questions as 1-, 2-, or 3-hop, depending on how many hops of reasoning are required to answer them. Relations are considered atomic.

And did you use (m.0rh6k location.location.people_born_here m.02qd3hj) as training data during KGE training?

Not sure about this particular triple, but yes, similar-looking triples would have been used in KGE training.

For MetaQA, we trained the KG embeddings once. Then for each QA dataset (1-, 2-, and 3-hop) we train and evaluate the QA model separately.
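The setup described above can be sketched as follows. This is a hypothetical outline, not EmbedKGQA's actual API: `train_kge` and `train_qa_model` are stand-in stubs that only illustrate the structure (KG embeddings trained once and shared, one QA model trained per hop setting).

```python
def train_kge(kg_triples):
    # Stand-in: real code would learn entity/relation embeddings (e.g. ComplEx)
    # over the full knowledge graph.
    return {"entity_emb": object(), "relation_emb": object()}


def train_qa_model(train_questions, kge):
    # Stand-in: real code would train a QA model on top of the FIXED embeddings.
    return {"kge": kge, "num_questions": len(train_questions)}


def run_metaqa(kg_triples, qa_datasets):
    kge = train_kge(kg_triples)  # Step 1: KG embeddings, trained ONCE
    models = {}
    for hops in ("1-hop", "2-hop", "3-hop"):
        # Step 2: a separate QA model per hop setting, sharing the same KGE
        models[hops] = train_qa_model(qa_datasets[hops], kge)
    return models
```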

mug2mag commented 2 years ago

For MetaQA, we trained the KG embeddings once. Then for each QA dataset (1-, 2-, and 3-hop) we train and evaluate the QA model separately.

So in the EmbedKGQA stage, you have three trained models?

apoorvumang commented 2 years ago

Yes, 3 different models for MetaQA.

mug2mag commented 2 years ago

Thanks very much for your reply and patience!!