Hi! Sorry to bother you. I recently came across this wonderful work.
After reading the code, I have a question. I noticed that you use all training triples from the link prediction datasets to train GNN-QE. For example, the number of training triples of FB15k-237 for link prediction is 544230 (if we add the reverse relations), while the number of training 1p queries of FB15k-237 for complex query answering is only 273710 (also with reverse relations).
Although you do not directly use all training triples as 1p queries to train the model, GNN-QE still obtains information from these triples beyond the 1p queries.
Hence, can we use all training triples from the link prediction datasets for this task (i.e., complex query answering)? Furthermore, can we directly use all training triples from the link prediction datasets as 1p queries to train the complex query answering model?
Thanks in advance. I would really appreciate your reply.
I got it. The number of training 1p queries is smaller than the number of training triples from the link prediction datasets because each 1p training query can have several answers; counting each (query, answer) pair separately recovers the full triple count.
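The relation between the two counts can be sketched as follows. This is a minimal illustration with hypothetical toy triples (not the actual FB15k-237 data or the GNN-QE preprocessing code): grouping triples by (head, relation) yields the distinct 1p queries, while the raw triples correspond to (query, answer) pairs.

```python
from collections import defaultdict

# Hypothetical toy triples (head, relation, tail); the real datasets
# would be loaded from the FB15k-237 files instead.
triples = [
    ("a", "r1", "b"),
    ("a", "r1", "c"),  # same (head, relation) as above -> same 1p query
    ("b", "r2", "c"),
]

# Group triples by (head, relation): each group is one distinct 1p query
# whose answer set is the set of all matching tails.
queries = defaultdict(set)
for h, r, t in triples:
    queries[(h, r)].add(t)

num_pairs = len(triples)        # 3 (query, answer) pairs
num_queries = len(queries)      # 2 distinct 1p queries
num_answers = sum(len(a) for a in queries.values())  # 3, matches num_pairs
print(num_pairs, num_queries, num_answers)
```

So a query count of 273710 against 544230 triples is consistent: on average each 1p query simply has about two answers.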