HKUDS / XRec

[EMNLP'2024] "XRec: Large Language Models for Explainable Recommendation"
http://arxiv.org/abs/2406.02377
Apache License 2.0
explainable-ai large-language-models recommender-systems

XRec: Large Language Models for Explainable Recommendation

PyTorch implementation for XRec: Large Language Models for Explainable Recommendation

XRec: Large Language Models for Explainable Recommendation
Qiyao Ma, Xubin Ren, Chao Huang
Preprint 2024


This paper presents XRec, a model-agnostic framework that integrates graph-based collaborative filtering with Large Language Models (LLMs) to generate comprehensive explanations for recommendations. By leveraging the inherent collaborative user-item relationships and the powerful text-generation capabilities of LLMs, XRec bridges collaborative signals and language semantics through a Mixture of Experts (MoE) adapter.

(Figure: the XRec framework)
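For intuition, the sketch below shows one way a MoE adapter can project collaborative (GNN) embeddings into the LLM's token-embedding space. It is only illustrative: the hidden sizes, the number of experts, and the module names are assumptions, and the actual adapter lives in explainer/models/explainer.py.

# Illustrative MoE adapter: maps a GNN user/item embedding into the LLM's
# token-embedding space via a gated mixture of expert MLPs. Sizes and the
# number of experts are assumptions, not the repo's actual configuration.
import torch
import torch.nn as nn

class MoEAdapter(nn.Module):
    def __init__(self, gnn_dim=64, llm_dim=4096, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(gnn_dim, llm_dim), nn.ReLU(), nn.Linear(llm_dim, llm_dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(gnn_dim, num_experts)

    def forward(self, emb):                                          # emb: (batch, gnn_dim)
        weights = torch.softmax(self.gate(emb), dim=-1)              # (batch, num_experts)
        expert_out = torch.stack([e(emb) for e in self.experts], 1)  # (batch, E, llm_dim)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)       # (batch, llm_dim)

# Usage: project a batch of user embeddings into "soft tokens" for the LLM.
adapter = MoEAdapter()
user_tokens = adapter(torch.randn(8, 64))  # -> shape (8, 4096)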

Environment

Run the following command to install dependencies:

pip install -r requirements.txt

Datasets

We utilize three public datasets: Amazon-books (amazon), Google-reviews (google), and Yelp (yelp). To generate user/item profiles and explanations from scratch, enter your OpenAI API key on line 7 of these files: generation/{item_profile/user_profile/explanation}/generate_{profile/exp}.py.
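For reference, the snippet below is a minimal sketch of the kind of OpenAI API call these scripts make to build a user profile. The model name and the handling of the prompt JSON files are assumptions; the scripts under generation/ are authoritative.

# Hedged sketch of profile generation with the OpenAI API; the actual
# generation/*/generate_*.py scripts may structure this differently.
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # set on line 7 of the real scripts

# Files shipped in the repo; their exact JSON layout is assumed here.
system_prompt = json.load(open("generation/user_profile/user_system_prompt.json"))
prompts = json.load(open("generation/user_profile/user_prompts.json"))

profiles = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; check the script for the one actually used
        messages=[
            {"role": "system", "content": str(system_prompt)},
            {"role": "user", "content": str(prompt)},
        ],
    )
    profiles.append(response.choices[0].message.content)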

Usage

Each of the commands below can be run independently, since the fine-tuned LLM and the generated explanations are already provided with the data. Prepare your Hugging Face User Access Token for downloading the Llama 2 model.

Supported datasets: amazon, google, yelp
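The explainer downloads Llama 2 from Hugging Face, so your access token must be authorized for the gated model. Below is a minimal sketch of authenticating and loading it with transformers; the checkpoint name is an assumption, and the repo's own code handles this step for you.

# Hedged sketch: authenticate with Hugging Face and load Llama 2.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="YOUR_HF_ACCESS_TOKEN")  # token with access to the gated Llama 2 repo

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; see explainer/ for the one used
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)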

Example

Below is an example of generating an explanation for a specific user-item recommendation using the yelp dataset.

Input

Output

Code Structure

├── README.md
├── data (amazon/google/yelp)
│   ├── data.json                         # user/item profile with explanation
│   ├── trn/val/tst.pkl                   # train/val/test splits of data.json
│   ├── total_trn/val/tst.csv             # user-item interactions
│   ├── user/item_emb.pkl                 # user/item embeddings
│   ├── user/item_converter.pkl           # MoE adapter
│   ├── tst_pred.pkl                      # generated explanation
│   └── tst_ref.pkl                       # ground truth explanation
├── encoder
│   ├── models                            # GNN structure
│   ├── utils
│   └── train_encoder.py                  # derive user/item embeddings
├── explainer
│   ├── models
│   │   ├── explainer.py                  # XRec model
│   │   └── modeling_explainer.py         # modified PyTorch LLaMA model
│   ├── utils
│   ├── main.py                           # employ XRec  
│   └── sample.py                         # see samples of generated explanations
├── generation
│   ├── instructions                      # system prompts for user/item profile and explanations
│   ├── item_profile                      # generate item profile
│   │   ├── item_prompts.json
│   │   ├── item_system_prompt.json
│   │   └── generate_profile.py
│   ├── user_profile                      # generate user profile
│   │   ├── user_prompts.json
│   │   ├── user_system_prompt.json
│   │   └── generate_profile.py
│   └── explanation                       # generate ground truth explanation
│       ├── exp_prompts.json
│       ├── exp_system_prompts.json  
│       └── generate_exp.py
└── evaluation
    ├── main.py
    ├── metrics.py   
    └── system_prompt.txt                  # system prompt for GPTScore
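To make the evaluation artifacts concrete, the sketch below loads the generated (tst_pred.pkl) and reference (tst_ref.pkl) explanations and computes a toy unigram-overlap score. It assumes each pickle holds a list of strings, which may not match the repo's serialization; evaluation/main.py and metrics.py implement the actual metrics (including GPTScore).

# Hedged sketch: compare generated vs. reference explanations with a toy
# unigram-overlap score. The pickle layout (a list of strings) is an assumption.
import pickle

with open("data/yelp/tst_pred.pkl", "rb") as f:
    preds = pickle.load(f)
with open("data/yelp/tst_ref.pkl", "rb") as f:
    refs = pickle.load(f)

def unigram_overlap(pred, ref):
    p, r = set(pred.lower().split()), set(ref.lower().split())
    return len(p & r) / max(len(r), 1)

scores = [unigram_overlap(p, r) for p, r in zip(preds, refs)]
print(f"mean unigram overlap: {sum(scores) / max(len(scores), 1):.3f}")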

Citation

If you find XRec helpful to your research or applications, please kindly cite:

@article{ma2024xrec,
  title={XRec: Large Language Models for Explainable Recommendation},
  author={Ma, Qiyao and Ren, Xubin and Huang, Chao},
  journal={arXiv preprint arXiv:2406.02377},
  year={2024}
}