uma-pi1 / kge

LibKGE - A knowledge graph embedding library for reproducible research
MIT License

SOTA configs with different PyTorch version #238

Open xlk369293141 opened 2 years ago

xlk369293141 commented 2 years ago

Thanks for sharing. But I have some problems reproducing your results. I use your setup.py, except for PyTorch 1.9.0 + CUDA 11.1 because of our limited GPU device support. I tried the ComplEx and ConvE configs you provided with three random seeds and report the best mean_reciprocal_rank_filtered_with_test (your reported results in brackets):

- ComplEx: FB15k-237: 27.0 (34.8), WNRR: 44.8 (47.5)
- ConvE: FB15k-237: 30.7 (33.9), WNRR: 42.5 (44.2)

What caused this difference?

And another question: how can I get all tail entity indexes that satisfy a particular query (sp_) from the Dataset class?

AdrianKs commented 2 years ago

Hi, can you run one of the experiments without any random seed as a sanity check? I want to make sure that there are no issues with seeding that influence the final quality.

Regarding your second question: to find the most probable tail entities, you need to score your sp-query against all objects and sort the resulting scores in descending order. You can get all scores with the function self.model.score_sp. The most probable objects are the ones ranked highest.

https://github.com/uma-pi1/kge/blob/782954e849e87d7afbbdde8fa9f1072921b6357a/kge/model/kge_model.py#L682
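For illustration, a minimal sketch modeled on the usage example in the LibKGE README; the checkpoint filename and the subject/relation indexes below are placeholders, not part of the original answer:

```python
import torch
from kge.model import KgeModel
from kge.util.io import load_checkpoint

# load a trained model from a checkpoint (path is a placeholder)
checkpoint = load_checkpoint("checkpoint_best.pt")
model = KgeModel.create_from(checkpoint)

# one (s, p) query; score it against all entities as object
s = torch.tensor([0]).long()  # subject index (placeholder)
p = torch.tensor([0]).long()  # relation index (placeholder)
scores = model.score_sp(s, p)  # shape: (1, num_entities)

# rank objects by score, highest first
ranked_objects = torch.argsort(scores, dim=-1, descending=True)
print(ranked_objects[0, :10])  # the ten most probable tail entities
```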

In case you are only looking for tail entities that answer the query with the triples given in the train set, you can use the index self.dataset.index("train_sp_to_o"). We have indexes like this for all splits (also valid and test).
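A sketch of the index lookup, continuing from the snippet above; the exact return type of the index may vary across LibKGE versions, and here it is assumed to behave like a mapping from (s, p) pairs to collections of object ids:

```python
# look up all tail entities observed for an (s, p) pair in the train split
train_sp_to_o = model.dataset.index("train_sp_to_o")

s, p = 0, 0  # example subject and relation indexes (placeholders)
# assumed dict-like access: (s, p) -> ids of all known tails, or None
tails = train_sp_to_o.get((s, p))
print(tails)
```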

rgemulla commented 2 years ago

As a data point: I reran ComplEx on FB15k-237 using the versions listed in setup.py and PyTorch 1.10. In the paper, we reported 34.8. The rerun produced 35.1 (without seed) and 35.2 (with --random_seed.default 1). I cannot reproduce this issue.

rgemulla commented 2 years ago

@xlk369293141 Are you still experiencing this problem?