yusanshi / news-recommendation

Implementations of some methods in news recommendation.

Abnormal performance of NAML #7

Closed Veason-silverbullet closed 3 years ago

Veason-silverbullet commented 3 years ago

Hi, @yusanshi

I'm puzzled by the performance of NAML.

I found that NAML outperforms LSTUR and NRMS by a large margin. The same pattern shows up in my own implementations of NAML, LSTUR, and NRMS. Do you know the reason? The MIND paper reports that NAML performs worse than LSTUR and NRMS (https://www.aclweb.org/anthology/2020.acl-main.331.pdf).

Many thanks!

yusanshi commented 3 years ago

Maybe because of the additional information NAML uses (e.g., category, subcategory, abstract)?

Just a guess, but I think it's worth a try 😁 (e.g., only use the title in NAML, or add category information to NRMS...).
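For reference, here is a minimal sketch (not this repository's actual code) of what such an ablation could look like: a NAML-style multi-view news encoder whose input views are configurable, so restricting it to `("title",)` roughly reduces it to a title-only encoder like NRMS/LSTUR use. All class names, dimensions, and the dict-based input format are illustrative assumptions.

```python
# Hypothetical sketch of a NAML-style multi-view news encoder with toggleable views.
import torch
import torch.nn as nn


class AdditiveAttention(nn.Module):
    """Scores a set of vectors with a learned query and returns their weighted sum."""

    def __init__(self, dim, query_dim=200):
        super().__init__()
        self.proj = nn.Linear(dim, query_dim)
        self.query = nn.Parameter(torch.randn(query_dim) * 0.01)

    def forward(self, x):                                  # x: (batch, n, dim)
        scores = torch.tanh(self.proj(x)) @ self.query     # (batch, n)
        weights = torch.softmax(scores, dim=-1)
        return (weights.unsqueeze(-1) * x).sum(dim=1)      # (batch, dim)


class MultiViewNewsEncoder(nn.Module):
    """Encodes each selected view of a news item, then attends over the view vectors."""

    def __init__(self, vocab_size=30000, n_categories=300, emb_dim=300,
                 n_filters=300, views=("title", "abstract", "category", "subcategory")):
        super().__init__()
        self.views = views
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.cat_emb = nn.Embedding(n_categories, n_filters, padding_idx=0)
        # CNN + word-level attention, shared here for the text views (title, abstract)
        self.text_cnn = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.text_attn = AdditiveAttention(n_filters)
        # view-level attention over whichever views are enabled
        self.view_attn = AdditiveAttention(n_filters)

    def _encode_text(self, token_ids):                     # (batch, seq_len) -> (batch, n_filters)
        x = self.word_emb(token_ids).transpose(1, 2)       # (batch, emb_dim, seq_len)
        x = torch.relu(self.text_cnn(x)).transpose(1, 2)   # (batch, seq_len, n_filters)
        return self.text_attn(x)

    def forward(self, news):
        # news: dict with keys matching self.views, e.g.
        # {"title": LongTensor(batch, title_len), "category": LongTensor(batch)}
        vectors = []
        for view in self.views:
            if view in ("title", "abstract"):
                vectors.append(self._encode_text(news[view]))
            else:                                           # category / subcategory IDs
                vectors.append(self.cat_emb(news[view]))
        if len(vectors) == 1:                               # title-only ablation: nothing to attend over
            return vectors[0]
        return self.view_attn(torch.stack(vectors, dim=1))


if __name__ == "__main__":
    # Full NAML-style encoder vs. the title-only ablation, on random dummy inputs.
    batch = {"title": torch.randint(1, 30000, (4, 20)),
             "abstract": torch.randint(1, 30000, (4, 50)),
             "category": torch.randint(1, 300, (4,)),
             "subcategory": torch.randint(1, 300, (4,))}
    full = MultiViewNewsEncoder()
    title_only = MultiViewNewsEncoder(views=("title",))
    print(full(batch).shape, title_only(batch).shape)      # both: torch.Size([4, 300])
```

Comparing the two variants under the same user encoder and training setup would isolate how much of NAML's advantage comes from the extra views rather than from the encoder architecture itself.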

yusanshi commented 3 years ago

As for the results in the paper, I don't know... It may be because of differences in implementation, but we don't know what code they used 🤣

Veason-silverbullet commented 3 years ago

@yusanshi Your guess is right! I emailed the MIND team and just got a response on this issue. Here is part of their response:

"While, in the MIND paper, for fair comparison with other methods, only news title is used in NAML. We think this is the reason that NAML performs much better than NRMS and LSTUR in your experiments while not in the MIND paper."

Thanks for your reply. This issue can be closed.