chenchongthu / EHCF

This is our implementation of EHCF: Efficient Heterogeneous Collaborative Filtering (AAAI 2020)

Would you please provide the validation set? #12

Closed rowedenny closed 3 years ago

rowedenny commented 3 years ago

Hi, I am trying to reproduce this paper in PyTorch, but I notice that the data repo does not include a dev set.

I tried randomly selecting one purchase from the training set as the dev set, but I see a large gap between the dev and test metrics. I suspect I am missing something important, so would you mind providing the dev set?

chenchongthu commented 3 years ago

Hi, thanks for your interest in our work! As described in our paper, each user's second-to-last record is used as validation data to tune the parameters. In the final experiments, we merged the validation data back into the training data to train the model. So in practice, the last record in the released training data is the one that served as validation data in our experiments.
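
To make the split concrete, here is a minimal sketch of the per-user hold-out described above (illustrative only; it assumes each line of buy.train.txt is a "user_id item_id" pair and that each user's records appear in chronological order):

```python
# Minimal sketch of the validation split (not the released code).
# Assumptions: each line of buy.train.txt is "user_id item_id", and a user's
# interactions are listed in chronological order, so the last line per user
# is their most recent purchase.
from collections import defaultdict

def split_train_dev(path):
    per_user = defaultdict(list)
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            user, item = line.strip().split()[:2]
            per_user[user].append(item)

    train, dev = [], {}
    for user, items in per_user.items():
        if len(items) > 1:
            dev[user] = items[-1]                        # last record held out as validation
            train.extend((user, i) for i in items[:-1])
        else:
            train.extend((user, i) for i in items)       # too few records to hold one out
    return train, dev
```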

rowedenny commented 3 years ago

I see, that may explain my issue. Thanks for your kind response! A quick follow-up: does "the training data" refer to `tp_train = load_data(os.path.join(DATA_ROOT, 'buy.train.txt'))`, as in the implementation? In short, is the dev set the last record of 'buy.train' rather than of the other behavior data?


To investigate the issue, I started from single-behavior BPR as implemented in RecBole, since it makes for an easy comparison. However, even when I train on the whole training set (training plus dev) and then validate and test on the test set, NDCG@10 is 0.0177, which is lower than the value reported in your paper.

So would you mind recalling the hyperparameters you used for BPR? For my experiment, I set the dimension to 64 for both user and item embeddings, trained with batch_size = 512 for 30 epochs, and used the Adam optimizer with lr = 1e-3, which was the best among [1e-2, 1e-3, 1e-4].
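
For concreteness, this is roughly the setup expressed through RecBole's `run_recbole` entry point (the dataset name is a placeholder for atomic files converted from buy.train.txt, and the config keys are assumed to follow RecBole's documented names):

```python
# Rough sketch of the RecBole run described above (dataset name is a placeholder).
from recbole.quick_start import run_recbole

config_dict = {
    'embedding_size': 64,      # user/item embedding dimension
    'train_batch_size': 512,
    'epochs': 30,
    'learning_rate': 1e-3,     # best among [1e-2, 1e-3, 1e-4]
    'metrics': ['Recall', 'NDCG'],
    'topk': [10],
}

run_recbole(model='BPR', dataset='beibei_buy', config_dict=config_dict)
```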

chenchongthu commented 3 years ago

Actually, we have tried the BPR implementation in RecBole in our other experiments, and its performance was also not good. It seems that RecBole's BPR has no L2 regularization, which easily leads to overfitting.
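
For reference, a minimal sketch of how an L2 term can be added directly to the BPR pairwise loss in PyTorch (illustrative, not the paper's implementation; the class and coefficient names are placeholders):

```python
# BPR loss with an explicit L2 penalty on the embeddings used in each batch.
import torch.nn as nn
import torch.nn.functional as F

class BPR(nn.Module):
    def __init__(self, n_users, n_items, dim=64, l2=0.01):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.l2 = l2

    def loss(self, users, pos_items, neg_items):
        u = self.user_emb(users)        # (batch, dim)
        p = self.item_emb(pos_items)
        n = self.item_emb(neg_items)
        diff = (u * p).sum(-1) - (u * n).sum(-1)
        bpr = -F.logsigmoid(diff).mean()
        # explicit L2 regularization on the embeddings involved in this batch
        reg = self.l2 * (u.pow(2).sum() + p.pow(2).sum() + n.pow(2).sum()) / users.size(0)
        return bpr + reg
```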

rowedenny commented 3 years ago

That totally makes sense. Thanks for your kind reminder!

I will follow your suggestion and see if L2 reg may push the performance closer to the metric reported in your paper.

rowedenny commented 3 years ago

> Actually, we have tried the BPR implementation in RecBole in our other experiments, and its performance was also not good. It seems that RecBole's BPR has no L2 regularization, which easily leads to overfitting.

I tried adding L2 regularization and the performance improves, yet it is still not good. I also tried different embedding initializations but did not see a big difference. I guess it could be related to some computational difference between TensorFlow and PyTorch. In any case, I will put it aside and move on.
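
For example, swapping the embedding initialization in PyTorch is a one-line change (the specific initializers and scales below are only examples, not necessarily the ones tried here):

```python
# Two common embedding initializations (sizes and scales are placeholders).
import torch.nn as nn

n_users, dim = 10000, 64
emb = nn.Embedding(n_users, dim)
nn.init.normal_(emb.weight, std=0.01)    # small Gaussian initialization
# or, alternatively:
nn.init.xavier_uniform_(emb.weight)      # Xavier/Glorot uniform initialization
```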

One last question: would you please confirm whether the dev set is the last record of 'buy.train', or whether it could also appear in the other behavior data?

chenchongthu commented 3 years ago

Yes, the dev set is the last record of 'buy.train'. By the way, we used the BPR implementation in LibRec, with lr = 1e-2, l2 = 0.01, and 500 epochs.
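
A rough mapping of those settings onto the PyTorch BPR sketch above would look like this (approximate only: LibRec applies the L2 term inside its own update rule, so weight_decay on the optimizer is not an exact equivalent):

```python
# Approximate PyTorch counterpart of the reported LibRec settings
# (lr = 1e-2, l2 = 0.01, 500 epochs). The BPR class is the sketch shown earlier.
import torch

model = BPR(n_users=10000, n_items=5000, dim=64, l2=0.0)   # L2 handled via weight_decay here
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.01)

for epoch in range(500):
    ...  # sample (user, positive, negative) triples and step on model.loss(...)
```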

rowedenny commented 3 years ago

Thank you so much!