dingjingtao / SRNS

Our implementation of the NeurIPS'20 paper "Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering"

The results of ENMF #1

Open chenchongthu opened 3 years ago

chenchongthu commented 3 years ago

Good job! But we reran the ENMF experiment and got different results, e.g., N@3=0.3124 and R@3=0.4016. Is anything wrong? BTW, even with the ENMF parameters provided in this paper, better performance can be obtained.

dingjingtao commented 3 years ago

> Good job! But we reran the ENMF experiment and got different results, e.g., N@3=0.3124 and R@3=0.4016. Is anything wrong? BTW, even with the ENMF parameters provided in this paper, better performance can be obtained.

Thanks for your interest in our work! Prompted by your comment, we have rerun the experiments in our paper (https://arxiv.org/abs/2009.03376).

  1. In Table 6, the best setting of ENMF on ML-1M is lr=0.01, reg=0.0001. Under our experimental setting (train/valid/test split, early stopping with patience 100, max epoch 400, repeated 5 times), the five runs give N@1=0.1845/0.1845/0.1856/0.1833/0.1916; N@3=0.3039/0.3006/0.3051/0.3035/0.3067; R@3=0.3920/0.3864/0.3940/0.3918/0.3922. These are close to the results reported in our paper. (A sketch of how N@k and R@k are computed follows this list.)

  2. However, after re-checking the ENMF paper, we found that ENMF may work better with a large learning rate and dropout regularization, whereas in our previous experiments the learning rate for all methods was searched in [5e-4, 1e-2]. We therefore expanded this range and added dropout. The new best setting for ENMF is lr=0.05 and dropout=0.9, with results N@1=0.1865/0.1865/0.1943/0.1906/0.1908; N@3=0.3075/0.3081/0.3115/0.3089/0.3113; R@3=0.3963/0.3970/0.3976/0.3948/0.4010. These are close to the numbers in your comment.

  3. Since ENMF works better with a much larger learning rate, we also reran its experiments on the Pinterest data. The best setting is now lr=0.2, giving N@1=0.2632, N@3=0.4198, R@3=0.5352.
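
For anyone reproducing these numbers, here is a minimal sketch of how N@k (NDCG) and R@k (recall) can be computed and averaged over repeated runs. It assumes a single held-out positive item per user, ranked against all items; that assumption, and all function names, are ours for illustration, so the actual evaluation script in this repo may differ (e.g., sampled candidates or multiple positives).

```python
import numpy as np

def ndcg_at_k(rank, k):
    # With a single relevant item, IDCG = 1, so NDCG@k reduces to
    # 1/log2(rank + 2) when the item appears in the top-k (rank is 0-based).
    return 1.0 / np.log2(rank + 2) if rank < k else 0.0

def recall_at_k(rank, k):
    # Recall@k with one relevant item is simply a top-k hit indicator.
    return 1.0 if rank < k else 0.0

def evaluate(scores, test_items, ks=(1, 3)):
    """scores: (num_users, num_items) predicted scores;
    test_items: the held-out positive item of each user."""
    ranks = [int((scores[u] > scores[u, i]).sum())  # items scored above the positive
             for u, i in enumerate(test_items)]
    return {f"{m}@{k}": float(np.mean([f(r, k) for r in ranks]))
            for k in ks
            for m, f in (("N", ndcg_at_k), ("R", recall_at_k))}

# Toy usage: random scores for 1000 users over 100 items, repeated 5 times
# and averaged, mirroring the "repeat 5 times" protocol above.
runs = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    scores = rng.random((1000, 100))
    test_items = rng.integers(0, 100, size=1000)
    runs.append(evaluate(scores, test_items))
print({k: np.mean([r[k] for r in runs]) for k in runs[0]})
```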

chenchongthu commented 3 years ago

We notice that on the Pinterest data, a small embedding size of 16 is adopted for all methods. According to your paper, some methods (e.g., SRNS) perform worse as the embedding size increases; in contrast, a larger embedding size is more suitable for ENMF.
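
One way to check this sensitivity empirically is to sweep the embedding size with everything else fixed. The toy matrix-factorization trainer below is purely illustrative (it is neither ENMF nor SRNS, and the data is random); it only shows the shape of such a sweep.

```python
import numpy as np

def train_mf(R, d, lr=0.05, reg=1e-4, epochs=30, seed=0):
    # Toy MF trained with SGD over all cells of a small dense 0/1 matrix;
    # the only point is that the embedding size d is the swept hyperparameter.
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((R.shape[0], d))
    Q = 0.1 * rng.standard_normal((R.shape[1], d))
    for _ in range(epochs):
        for u in range(R.shape[0]):
            for i in range(R.shape[1]):
                pu = P[u].copy()
                err = R[u, i] - pu @ Q[i]
                P[u] += lr * (err * Q[i] - reg * pu)
                Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Sweep embedding sizes on random toy interactions; a real study would
# compare N@k / R@k on held-out interactions rather than training MSE.
R = (np.random.default_rng(1).random((50, 40)) < 0.1).astype(float)
for d in (16, 32, 64):
    P, Q = train_mf(R, d)
    print(d, float(np.mean((R - P @ Q) ** 2)))
```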