HKUDS / RLMRec

[WWW'2024] "RLMRec: Representation Learning with Large Language Models for Recommendation"
https://arxiv.org/abs/2310.15950
Apache License 2.0

LLMs as recommenders #8

Closed duduke37 closed 2 months ago

duduke37 commented 4 months ago

Hello, I am very interested in the LLMs-as-recommenders experiment in Appendix A.3 of your paper, i.e. the case study on LLM-based re-ranking where the candidate items are retrieved by LightGCN. I would like to reproduce it. Could you provide this part of the code or a detailed description of the setup?

Re-bin commented 3 months ago

Hi! 👋

Thanks for your interest! Here are some instructions for the re-ranking experiments.

  1. Initially, we trained a LightGCN model using the Amazon dataset. Subsequently, we selected the top-30/35/40/45/50 items recommended by LightGCN as candidate items for re-ranking. We then evaluated the Recall and NDCG metrics for the re-ranked top-10 and top-20 items.

  2. In our approach, we used the item title as the meta information for each item. The re-ranking was guided by the prompts illustrated in Figure 9, which also incorporate the user's historically interacted items. The re-ranking was performed individually for each user, after which the overall performance was computed over the entire dataset (a rough code sketch follows below).
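
In case a concrete starting point helps, here is a rough Python sketch of the pipeline described above. It is illustrative only, not the exact code used for the paper: `call_llm`, `build_rerank_prompt`, the prompt wording, and the data structures are placeholder assumptions that you would replace with your own LLM client and the candidate scores from your trained LightGCN model.

```python
import math

# Hypothetical stand-in for an LLM call (OpenAI, a local model, etc.);
# it should return the model's reply to the prompt as plain text.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")


def build_rerank_prompt(history_titles, candidate_titles, top_k):
    """Assemble a re-ranking prompt in the spirit of Figure 9: the user's
    historically interacted item titles plus the LightGCN candidates."""
    history = "\n".join(f"- {t}" for t in history_titles)
    candidates = "\n".join(f"{i}. {t}" for i, t in enumerate(candidate_titles))
    return (
        "The user has previously interacted with these items:\n"
        f"{history}\n\n"
        f"Candidate items:\n{candidates}\n\n"
        f"Re-rank the candidates and output the indices of the top {top_k} items "
        "the user is most likely to interact with, best first, comma-separated."
    )


def rerank_one_user(scores, item_titles, history_ids, n_candidates=30, top_k=10):
    """scores: {item_id: LightGCN score} for one user (history items excluded).
    Take the top-n candidates, ask the LLM to re-rank them, return item ids."""
    candidates = sorted(scores, key=scores.get, reverse=True)[:n_candidates]
    prompt = build_rerank_prompt([item_titles[i] for i in history_ids],
                                 [item_titles[i] for i in candidates], top_k)
    reply = call_llm(prompt)
    # Parse the comma-separated indices; fall back to the LightGCN order on failure.
    order = [int(x) for x in reply.replace(",", " ").split() if x.isdigit()]
    ranked = [candidates[i] for i in order if i < len(candidates)]
    return (ranked or candidates)[:top_k]


def recall_ndcg_at_k(ranked_ids, ground_truth, k):
    """Standard Recall@k / NDCG@k for a single user."""
    hits = [1.0 if i in ground_truth else 0.0 for i in ranked_ids[:k]]
    recall = sum(hits) / max(len(ground_truth), 1)
    dcg = sum(h / math.log2(pos + 2) for pos, h in enumerate(hits))
    idcg = sum(1.0 / math.log2(pos + 2) for pos in range(min(len(ground_truth), k)))
    return recall, (dcg / idcg if idcg > 0 else 0.0)
```

The per-user Recall and NDCG from `recall_ndcg_at_k` are then averaged over all test users, and the whole procedure is repeated for candidate sizes 30/35/40/45/50 and cutoffs 10/20.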

I hope the information provided is useful :)

Best regards, Xubin