LARS-research / AutoSF

Y. Zhang, Q. Yao, J. Kwok. Bilinear Scoring Function Search for Knowledge Graph Learning. TPAMI 2022

How to explicitly re-evaluate the model using the best scoring function #3

Closed jpainam closed 4 years ago

jpainam commented 4 years ago

@yzhangee @quanmingyao @skeletondyh Hi, I'm having a problem similar to #2

  1. After training with bash run.sh, I get the following results saved in WN18_perf.txt file as

    1 2 0 3         best_performance: 0.7187    0.8277      0.7131  0.8213
    3 0 1 2         best_performance: 0.7884    0.8459      0.7877  0.8470
    1 0 2 3         best_performance: 0.9488    0.9533      0.9488  0.9548
    2 3 0 1         best_performance: 0.9486    0.9541      0.9490  0.9565
    0 1 2 3         best_performance: 0.8099    0.9466      0.8128  0.9503
    2 3 0 1 2 3 3 1 0 2 2 1         best_performance: 0.9482    0.9534      0.9483  0.9546
    2 3 0 1 2 3 3 -1 2 2 2 1        best_performance: 0.9454    0.9536      0.9450  0.9542
    etc..

    I would like to know what these numbers (e.g. 0.9454 0.9536 0.9450 0.9542) represent. Do they represent the hits? In your paper, you reported MRR, H@1, and H@10, but here I see four numbers for each function.

  2. Is it possible to only test the model using the obtained scoring function f(h,r,t)? It would be good if you could add to the README the command for testing the model with a specified scoring function.

Thanks

yzhangee commented 4 years ago

Currently, this code is for searching. Based on the code file "base_model.py", the four values are validation MRR, validation Hit@10, test MRR, and test Hit@10, respectively.
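With that mapping, each line of the `*_perf.txt` output can be parsed programmatically. A minimal sketch (the `best_performance:` tag and whitespace layout are taken from the output above; the field names are my own, based on the explanation here):

```python
def parse_perf_line(line):
    """Parse one line of a *_perf.txt file into a structure and its metrics.

    Each line holds the searched structure (a list of integers), the literal
    tag 'best_performance:', and four floats which, per base_model.py, are
    validation MRR, validation Hit@10, test MRR, and test Hit@10.
    """
    left, right = line.split("best_performance:")
    structure = [int(tok) for tok in left.split()]
    val_mrr, val_hit10, test_mrr, test_hit10 = (float(x) for x in right.split())
    return {
        "structure": structure,
        "val_mrr": val_mrr,
        "val_hit10": val_hit10,
        "test_mrr": test_mrr,
        "test_hit10": test_hit10,
    }

# Example: the third line from the output above
rec = parse_perf_line("1 0 2 3    best_performance: 0.9488 0.9533 0.9488 0.9548")
```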

I will add more information to the output string. In the meantime, you can evaluate a searched scoring function via the run_model function in the train.py file. The full evaluation code will be added in the future.
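For intuition, a length-4 structure like those above can be read as the basic bilinear family from the paper: the embedding is split into four blocks, and the i-th integer picks which relation block multiplies the pair (h_i, t_i) on the diagonal, so "0 1 2 3" recovers DistMult. A minimal NumPy sketch under that reading; the function name and block layout here are illustrative, not the repo's actual API:

```python
import numpy as np

def bilinear_score(h, r, t, structure):
    """Score a triple under a diagonal block-wise bilinear scoring function.

    h, r, t are embeddings of dimension divisible by 4; structure is a
    length-4 list g where the i-th term of the score is <h_i, r_{g_i}, t_i>.
    structure == [0, 1, 2, 3] reduces to the plain DistMult score.
    """
    hb = np.split(h, 4)  # head embedding split into 4 blocks
    rb = np.split(r, 4)  # relation embedding split into 4 blocks
    tb = np.split(t, 4)  # tail embedding split into 4 blocks
    return sum(np.sum(hb[i] * rb[g] * tb[i]) for i, g in enumerate(structure))

# With the identity structure this equals the DistMult score sum(h * r * t)
h = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([0.5, 0.5, 0.5, 0.5])
t = np.array([1.0, 1.0, 1.0, 1.0])
assert bilinear_score(h, r, t, [0, 1, 2, 3]) == np.sum(h * r * t)
```

Longer structures in the searched output add further (signed) off-diagonal block interactions, which this sketch does not cover.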