ShomyLiu / Neu-Review-Rec

A Toolkit for Neural Review-based Recommendation models with PyTorch.
http://shomy.top/2019/12/31/neu-review-rec/

Did you succeed in reproducing the numbers in the paper? #14

Closed sh0416 closed 3 years ago

sh0416 commented 3 years ago

Hi, I was curious about the results of this framework. I also do this kind of work, but I failed to reproduce their work because of the lack of experimental settings such as preprocessing. I have only successfully reproduced "Hidden Factors and Hidden Topics" (2013). If you think you have successfully reproduced their results, could you share your baseline performance and the preprocessing steps? Thanks

ShomyLiu commented 3 years ago

Hi, as noted in the README.md:

Note that the review-processing methods usually differ among these papers (e.g., the vocab, padding), which influences their performance. In this repo, to be fair, we adopt the same pre-processing approach for all the methods. Hence the performance may not be consistent with the original papers.

In my experience reproducing these models, different preprocessing methods lead to different results; that is one of the reasons we released this framework. In addition, performance varies considerably across datasets. Hence, in my opinion, using the results that you reproduce yourself is the better way.
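To make the preprocessing point concrete, here is a minimal sketch (in plain Python; the function names and defaults are illustrative, not the repo's actual code) of the kind of shared vocab-and-padding step the README refers to, which would be reused unchanged for every model under comparison:

```python
from collections import Counter

# Special tokens shared by all models under comparison (illustrative names).
PAD, UNK = "<pad>", "<unk>"

def build_vocab(reviews, max_size=50000, min_freq=1):
    """Build one vocabulary from all tokenized reviews, shared by every model."""
    counts = Counter(tok for review in reviews for tok in review)
    words = [w for w, c in counts.most_common(max_size) if c >= min_freq]
    return {w: i for i, w in enumerate([PAD, UNK] + words)}

def encode_and_pad(review, vocab, max_len=100):
    """Map tokens to ids, truncate to max_len, and pad with the PAD id."""
    ids = [vocab.get(tok, vocab[UNK]) for tok in review[:max_len]]
    return ids + [vocab[PAD]] * (max_len - len(ids))

reviews = [["great", "phone"], ["bad", "battery", "life"]]
vocab = build_vocab(reviews)
print(encode_and_pad(reviews[0], vocab, max_len=5))  # [2, 3, 0, 0, 0]
```

If two papers differ in `max_len`, `min_freq`, or the vocabulary size here, their reported numbers are already not directly comparable, which is exactly the issue being discussed.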

sh0416 commented 3 years ago

Thanks for your reply. One additional question: did you get consistent results across the papers? In my experience, I don't get any improvement from neural approaches. Did you? Some papers could not properly validate their ideas. For example, one paper introduces a bias term for its method while the baselines don't use one, which is why their method beats the other baselines. Therefore, I think you would have a difficult time producing reasonable results. If you have these results and publish them in table form, they could serve as official, reliable benchmark scores for future work. What do you think about this suggestion? Again, thanks for your work.

ShomyLiu commented 3 years ago

Hi!

  1. The bias term in rating prediction is indeed a "magic" feature that can boost performance. But I think it is a design choice for the rating-prediction layer, so if a paper's ablation study is enough to validate the method's effectiveness, it would be fine.

  2. > I don't get any improvement from neural approaches

     Some neural approaches can obtain better results than traditional methods, such as CARL/DAML and NARRE. Of course, some methods did not perform as well as reported in their papers.

  3. > If you have these results and publish them in table form, they could serve as official, reliable benchmark scores for future work. What do you think about this suggestion?

     Thanks for the suggestion. We are planning to write a survey on neural approaches to review-based recommendation, and we will try to report the results in the survey.
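To illustrate the bias-term point above: a minimal sketch (plain Python, illustrative names only, not the repo's actual prediction layer) of a rating-prediction layer in the matrix-factorization style, where the global mean and per-user/per-item biases are added on top of the latent interaction:

```python
def predict(user_vec, item_vec, mu=0.0, b_u=0.0, b_i=0.0):
    """r_hat = <p_u, q_i> + mu + b_u + b_i (biases default to zero)."""
    interaction = sum(p * q for p, q in zip(user_vec, item_vec))
    return interaction + mu + b_u + b_i

# Without bias terms, only the learned interaction contributes:
print(predict([0.1, 0.2], [0.3, 0.4]))  # ~0.11
# With a global mean and user/item biases, the same interaction
# produces a very different prediction:
print(predict([0.1, 0.2], [0.3, 0.4], mu=3.5, b_u=0.2, b_i=-0.1))  # ~3.71
```

This is why comparing a biased method against bias-free baselines is unfair: the bias terms alone absorb most of the rating scale, independent of the review-modeling idea being evaluated.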

sh0416 commented 3 years ago

Thanks! It will be really helpful for many researchers including me :)