chenchongthu / ENMF

This is our implementation of ENMF: Efficient Neural Matrix Factorization (TOIS. 38, 2020). This also provides a fair evaluation of existing state-of-the-art recommendation models.
MIT License

Cannot reproduce the ENMF results on the ml-lcfn dataset claimed in README #4

Closed gimpong closed 3 years ago

gimpong commented 3 years ago

Hi! Nice work! I have been trying your code for the past few days and got the expected results in most cases, but I could not reproduce the results on the ml-lcfn dataset claimed in the README.

Here are my trials with the provided code:

  1. With the default hyperparameters of the provided code, i.e., dropout keep_prob=0.7 and negative weight=0.1, the best results I got were: NDCG@5=0.22135453305703484, NDCG@10=0.22871178869000672, NDCG@20=0.2525169010557999.
  2. With the hyperparameters suggested in the README, i.e., dropout keep_prob=0.5 and negative weight=0.5, the best results I got were: NDCG@5=0.24160408294952565, NDCG@10=0.24239649929731227, NDCG@20=0.25935423043524214.
  3. With dropout keep_prob=0.7 and negative weight=0.5 (the best pair I have tried), the best results I got were: NDCG@5=0.24156951242563474, NDCG@10=0.24269257187356102, NDCG@20=0.26141558703625023.

Note that none of the above matches the results reported in the README, i.e., NDCG@5=0.2457, NDCG@10=0.2475, NDCG@20=0.2656. Could you help me figure out how to reproduce them?
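For reference, the NDCG@K numbers above can be computed with the standard binary-relevance formula; this is only a sketch of the metric (the repo's own evaluation script is authoritative and may differ in details such as IDCG truncation):

```python
import math

def ndcg_at_k(ranked_relevance, k):
    """NDCG@k for one user's ranked list of binary relevance labels (1 = hit)."""
    rel = list(ranked_relevance)[:k]
    # DCG: relevance discounted by log2 of the (1-based) rank + 1
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rel))
    # Ideal DCG: the same list with all hits moved to the top
    ideal = sorted(rel, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# A hit at rank 1 is perfect; the same hit at rank 2 is discounted.
print(ndcg_at_k([1, 0, 0], 3))  # 1.0
print(ndcg_at_k([0, 1, 0], 3))  # ~0.63
```

Per-user values are then averaged over all test users to get the reported NDCG@K.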

chenchongthu commented 3 years ago

Hi, thank you for your interest in our work. What embedding size are you using? For a fair comparison, we also set the embedding size to 128, the value used in the LCFN work.

gimpong commented 3 years ago

Thank you, I got it. The results were reproduced.