RuihongQiu / DuoRec

Code for WSDM 2022 paper, Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation.
MIT License

Cannot reproduce the results posted in the paper. CE loss vs. BPR loss. Nice try. #7

Open peggy95 opened 1 year ago

peggy95 commented 1 year ago

I used the source code provided here, downloaded the ml-1m dataset, and just ran `sh duorec.sh`, but I can't get the results reported in the paper.

More interestingly, the baselines use BPR loss as the main objective, which takes 1 positive and 1 negative item at a time, while this work uses CE loss, which takes 1 positive and treats all remaining items as negatives. That comparison is unfair.

When I change the main objective to BPR loss, the loss commonly used in earlier papers, the performance is even lower than CL4SRec's.
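For reference, the distinction peggy95 is drawing can be sketched with a minimal stdlib example. The scores and indices below are hypothetical, and this is a simplified scalar illustration, not the repo's actual training code:

```python
import math

def softmax_ce(scores, pos_idx):
    """Full-softmax cross-entropy over the whole catalogue.

    Every item other than the positive implicitly acts as a negative,
    which is the objective DuoRec trains with.
    """
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[pos_idx]

def bpr(pos_score, neg_score):
    """BPR loss: one sampled negative per positive.

    Maximizes the log-sigmoid of the score margin between the
    positive item and a single sampled negative.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

# Hypothetical scores for a tiny 4-item catalogue; item 0 is the positive.
scores = [2.0, 0.5, -1.0, 0.0]
ce_loss = softmax_ce(scores, pos_idx=0)        # contrasts against all 3 other items
bpr_loss = bpr(pos_score=2.0, neg_score=0.5)   # contrasts against 1 sampled item
```

With many more items in the denominator, the CE objective gets a much stronger training signal per step than single-negative BPR, which is the core of the fairness complaint.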

ZhengxiangShi commented 1 year ago

Cannot reproduce the same results in the paper

I have also tried to reproduce the results. I got the following by running (note the `--lmd=0.1` flag, which appears to be missing its dashes in the original command):

```shell
python run_seq.py --dataset='ml-1m' --train_batch_size=256 --lmd=0.1 --lmd_sem=0.1 --model='DuoRec' --contrast='us_x' --sim='dot' --tau=1
```

However, the test results are:

```
{'recall@5': 0.1664, 'recall@10': 0.256, 'recall@20': 0.3528, 'recall@50': 0.5033,
 'mrr@5': 0.0896, 'mrr@10': 0.1016, 'mrr@20': 0.1084, 'mrr@50': 0.1131,
 'ndcg@5': 0.1086, 'ndcg@10': 0.1376, 'ndcg@20': 0.1621, 'ndcg@50': 0.1919,
 'precision@5': 0.0333, 'precision@10': 0.0256, 'precision@20': 0.0176, 'precision@50': 0.0101}
```

The results reported in the paper are as follows:

If you could clarify this further, it would be much appreciated!

Thanks a lot!