thunlp / OpenKE

An Open-Source Package for Knowledge Embedding (KE)

Testing Result #93

Closed yuhaozhang97 closed 5 years ago

yuhaozhang97 commented 5 years ago

Hi, can you explain what l(raw) and r(raw) mean in the output of con.test()?

Also, there's an indentation error in Config.py, line 61.

THUCSTHanxu13 commented 5 years ago

I have fixed the indentation error.

THUCSTHanxu13 commented 5 years ago

Here are some details of the test settings:

Link prediction aims to predict the missing h or t for a relation fact triple (h, r, t). In this task, for each missing-entity position, the system is asked to rank a set of candidate entities from the knowledge graph, instead of giving only one best result. For each test triple (h, r, t), we replace the head/tail entity with every entity in the knowledge graph, and rank these entities in descending order of the similarity scores calculated by the score function fr. We use the following evaluation metrics:

- MR: mean rank of correct entities;
- MRR: average of the reciprocal ranks of correct entities;
- Hit@N: proportion of correct entities ranked in the top N.
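The metrics above can be sketched as follows. This is a minimal illustration, not OpenKE's internal code: it assumes you already have, for each test triple, the 1-based rank of the correct entity among all candidates (con.test() computes these ranks internally).

```python
def link_prediction_metrics(ranks, n=10):
    """Compute MR, MRR, and Hit@N from the ranks of the correct entities.

    ranks: list of 1-based ranks of the correct entity, one per test triple
           (hypothetical input for illustration).
    n:     cutoff for Hit@N.
    """
    mr = sum(ranks) / len(ranks)                           # mean rank
    mrr = sum(1.0 / r for r in ranks) / len(ranks)         # mean reciprocal rank
    hit_n = sum(1 for r in ranks if r <= n) / len(ranks)   # Hit@N
    return mr, mrr, hit_n

# Example: correct entities ranked 1, 3, and 10 among the candidates
mr, mrr, hit10 = link_prediction_metrics([1, 3, 10], n=10)
print(mr, mrr, hit10)  # ~4.667, ~0.478, 1.0
```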

yuhaozhang97 commented 5 years ago

Thanks for your reply, but when I tried to run the code, I actually got all zeros for the Hit@N metrics (for the HolE, DistMult, and RESCAL models). Is it a parameter issue?

THUCSTHanxu13 commented 5 years ago

Given (h, r, t), we require models to predict (?, r, t) and (h, r, ?); the results are reported as l(raw/filter) and r(raw/filter), respectively. For details on the raw/filter settings, see the paper "Translating Embeddings for Modeling Multi-relational Data".
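The raw/filter distinction can be sketched like this. This is an illustrative toy, not OpenKE's API: it assumes a higher score means a better candidate, and that `known_true` holds the other entities that also form true triples for the same query (collected from train/valid/test in the filtered protocol).

```python
def rank_entity(scores, correct, known_true, filtered):
    """Rank the correct entity among all candidates by descending score.

    scores:     dict mapping candidate entity -> score for the corrupted slot
    correct:    the gold entity for this test triple
    known_true: other entities that also complete a true triple here
    filtered:   True for the filter setting (skip other true answers),
                False for the raw setting (count every higher-scoring candidate).
    """
    target = scores[correct]
    rank = 1
    for entity, score in scores.items():
        if entity == correct:
            continue
        if filtered and entity in known_true:
            continue  # filter setting: other correct answers don't hurt the rank
        if score > target:
            rank += 1
    return rank

scores = {"A": 0.9, "B": 0.8, "C": 0.7}
# "C" is the gold entity; "A" happens to be another true answer for the query
print(rank_entity(scores, "C", {"A"}, filtered=False))  # raw rank: 3
print(rank_entity(scores, "C", {"A"}, filtered=True))   # filtered rank: 2
```

The filtered rank is always less than or equal to the raw rank, which is why filtered Hit@N numbers are typically higher.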
