allegro / allRank

allRank is a framework for training learning-to-rank neural models based on PyTorch.
Apache License 2.0

question about the result #30

Closed · 1245244103 closed 3 years ago

1245244103 commented 3 years ago

I standardized the features and set up config.json as follows: [screenshot of config.json]. I ran on Fold 1 of MSLR-WEB30K and got only 0.502 NDCG@5 on the test set. Is there any step that I missed? Thanks a lot!
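For context (this is not from the original thread), standardization on MSLR-WEB30K is usually done per feature, with the scaling statistics fit on the training split only and then applied to the other splits. A minimal scikit-learn sketch, assuming the stock libsvm-format Fold1 files; the file paths and the sparse-friendly `with_mean=False` choice are assumptions, not allRank's pipeline:

```python
# Minimal sketch of per-feature standardization for MSLR-WEB30K Fold 1.
# Paths are hypothetical; the dataset ships in libsvm/SVMLight format
# with one qid per query and 136 features per document.
from sklearn.datasets import load_svmlight_file, dump_svmlight_file
from sklearn.preprocessing import StandardScaler

# Load the train and test splits, keeping the query ids.
X_train, y_train, qid_train = load_svmlight_file("Fold1/train.txt", query_id=True)
X_test, y_test, qid_test = load_svmlight_file("Fold1/test.txt", query_id=True)

# Fit the scaler on the training split only, then apply it to both splits,
# so no test-set statistics leak into training. with_mean=False keeps the
# matrices sparse (centering would densify them).
scaler = StandardScaler(with_mean=False)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Write the standardized splits back out in the same libsvm format.
dump_svmlight_file(X_train, y_train, "Fold1/train_std.txt", query_id=qid_train)
dump_svmlight_file(X_test, y_test, "Fold1/test_std.txt", query_id=qid_test)
```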

PrzemekPobrotyn commented 3 years ago

We are currently working on a reproducibility guide and will post it here on GitHub soon; hopefully it will answer your question.

1245244103 commented 3 years ago

> We are currently working on a reproducibility guide and will post it here on GitHub soon; hopefully it will answer your question.

That is good news. I am happy to wait for it!

sadaharu-inugami commented 3 years ago

Did you use the latest 1.4.1 version to produce these results? In that version, we changed the way we calculate NDCG. Following LightGBM's and XGBoost's default behaviour (AFAIK, this was also how the results were calculated in, e.g., the LambdaLoss paper), we now set NDCG = 1.0 for lists with no relevant items, i.e. where IDCG == 0.0.
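For concreteness, here is a minimal NumPy sketch of NDCG@k under that convention. The gain 2^rel − 1 and log2 position discount follow the common formulation; the function names are illustrative, not allRank's API:

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Standard DCG with gain 2^rel - 1 and log2 position discount."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2 ** rel - 1) / discounts)

def ndcg_at_k(relevances, k):
    """NDCG@k; returns 1.0 for lists with no relevant items (IDCG == 0)."""
    ideal = sorted(relevances, reverse=True)
    idcg = dcg_at_k(ideal, k)
    if idcg == 0.0:
        return 1.0  # the LightGBM/XGBoost-style default discussed above
    return dcg_at_k(relevances, k) / idcg

# An all-zero list scores 1.0 under this convention instead of 0.0,
# which raises the dataset-average NDCG relative to the old behaviour.
print(ndcg_at_k([0, 0, 0], 5))  # 1.0
print(ndcg_at_k([3, 0, 1], 5))  # ~0.983, since the order is not ideal
```

MSLR-WEB30K contains queries with no relevant documents, so which convention is used for IDCG == 0 noticeably shifts the reported test-set NDCG.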

sadaharu-inugami commented 3 years ago

Also, I can confirm that we are working on the reproducibility guide and will be posting it very soon, maybe as soon as today or tomorrow.