
Scripts for processing the Amazon Reviews 2023 dataset; implementations and checkpoints of BLaIR: "Bridging Language and Items for Retrieval and Recommendation".
MIT License

Amazon Reviews 2023

[šŸŒ Website] Ā· [šŸ¤— Huggingface Datasets] Ā· [šŸ“‘ Paper] Ā· [šŸ”¬ McAuley Lab]


This repository contains:

Recommendation Benchmarks

Based on the released Amazon Reviews 2023 dataset, we provide scripts that preprocess the raw data into standard train/validation/test splits to facilitate benchmarking of recommendation models.

More details here -> [datasets & processing scripts]
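The exact preprocessing pipeline lives in the linked scripts; as a rough illustration of the standard leave-last-out protocol commonly used for sequential recommendation benchmarks, here is a minimal sketch (the function name and tuple layout are hypothetical, not taken from this repo):

```python
from collections import defaultdict

def leave_last_out_split(interactions):
    """Split each user's chronologically ordered interactions into
    train / validation / test via the leave-last-out protocol: the last
    item is held out for testing, the second-to-last for validation,
    and the rest are kept for training.

    `interactions` is a list of (user_id, item_id, timestamp) tuples.
    """
    per_user = defaultdict(list)
    for user, item, ts in interactions:
        per_user[user].append((ts, item))

    train, valid, test = {}, {}, {}
    for user, events in per_user.items():
        items = [item for _, item in sorted(events)]
        if len(items) < 3:
            # Too few interactions to hold out two items; keep all for training.
            train[user] = items
            continue
        train[user] = items[:-2]
        valid[user] = items[-2]
        test[user] = items[-1]
    return train, valid, test

# Example: one user with four timestamped purchases.
logs = [("u1", "B001", 3), ("u1", "B002", 1), ("u1", "B003", 2), ("u1", "B004", 4)]
train, valid, test = leave_last_out_split(logs)
print(train["u1"], valid["u1"], test["u1"])  # ['B002', 'B003'] B001 B004
```

Sorting by timestamp before splitting matters: the held-out test item must be the user's chronologically last interaction, otherwise the benchmark leaks future information into training.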

BLaIR

BLaIR, which is short for "Bridging Language and Items for Retrieval and Recommendation", is a series of language models pre-trained on the Amazon Reviews 2023 dataset.

BLaIR is grounded on pairs of (item metadata, language context), enabling the models to learn correspondences between items and natural-language contexts for retrieval and recommendation.

More details here -> [checkpoints & code]
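In practice the released checkpoints are loaded with an encoder library and used to embed item metadata and language contexts into a shared space; see the linked code for the actual usage. Purely as an illustration of the downstream retrieval step, here is a dependency-free sketch that ranks items by cosine similarity to a context embedding (the toy 3-d vectors and names stand in for real encoder outputs):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_items(context_emb, item_embs):
    """Rank item ids by cosine similarity to a language-context embedding,
    mirroring how an embedding model such as BLaIR is used for retrieval."""
    scores = {item: cosine(context_emb, emb) for item, emb in item_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy 3-d embeddings standing in for encoder outputs.
items = {"mug": [0.9, 0.1, 0.0], "tent": [0.0, 0.8, 0.6], "lamp": [0.3, 0.3, 0.9]}
query = [0.0, 0.7, 0.7]  # e.g. the encoded context "waterproof shelter for camping"
print(rank_items(query, items))  # ['tent', 'lamp', 'mug']
```

The key design point of grounding on (item metadata, language context) pairs is that both sides end up in the same embedding space, so a single similarity function serves both search and recommendation.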

Amazon-C4

Amazon-C4, which is short for "Complex Contexts Created by ChatGPT", is a new dataset for the complex product search task.

Amazon-C4 is designed to assess a model's ability to comprehend complex language contexts and retrieve relevant items.

More details here -> [datasets & code]
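The paper defines the benchmark's official evaluation protocol; as an illustration of the kind of ranking metrics typically reported for such a retrieval task, here is a minimal sketch of Hit@k and NDCG@k for a single ground-truth item (the metric choice and sample ids are assumptions, not the repo's code):

```python
import math

def hit_at_k(ranked_items, relevant_item, k):
    """1.0 if the ground-truth item appears in the top-k results, else 0.0."""
    return 1.0 if relevant_item in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, relevant_item, k):
    """NDCG@k with a single relevant item: 1/log2(rank + 1) if the item
    appears in the top k (ranks are 1-based), else 0.0."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == relevant_item:
            return 1.0 / math.log2(rank + 1)
    return 0.0

# A model's ranked retrieval results for one query (toy item ids).
ranking = ["B003", "B007", "B001", "B042"]
print(hit_at_k(ranking, "B007", 3))   # 1.0
print(ndcg_at_k(ranking, "B007", 3))  # 1/log2(3), about 0.6309
```

Both metrics are averaged over all queries in practice; NDCG additionally rewards placing the relevant item closer to the top of the list.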

Contact

Please let us know if you encounter a bug or have any suggestions/questions by filing an issue or emailing Yupeng Hou (@hyp1231) at yphou@ucsd.edu.

Acknowledgement

If you find the Amazon Reviews 2023 dataset, the BLaIR checkpoints, the Amazon-C4 dataset, or our scripts/code helpful, please cite the following paper:

```bibtex
@article{hou2024bridging,
  title={Bridging Language and Items for Retrieval and Recommendation},
  author={Hou, Yupeng and Li, Jiacheng and He, Zhankui and Yan, An and Chen, Xiusi and McAuley, Julian},
  journal={arXiv preprint arXiv:2403.03952},
  year={2024}
}
```

The recommendation experiments in the BLaIR paper are implemented using the open-source recommendation library RecBole.

The pre-training scripts draw heavily on the Hugging Face language-modeling examples and SimCSE.