
[NeurIPS 2024] The implementation of the paper "On Softmax Direct Preference Optimization for Recommendation".
https://arxiv.org/abs/2406.09215

On Softmax Direct Preference Optimization for Recommendation

![world](assets/framework.png)

Recommender systems aim to predict personalized rankings based on user preference data. With the rise of Language Models (LMs), LM-based recommenders have been widely explored due to their extensive world knowledge and powerful reasoning abilities. Most LM-based recommenders convert historical interactions into language prompts, pairing them with a positive item as the target response and fine-tuning the LM with a language modeling loss. However, this objective fails to fully leverage preference data and is not optimized for personalized ranking tasks, which hinders the performance of LM-based recommenders. Inspired by the recent advances of Direct Preference Optimization (DPO) in human preference alignment and the success of softmax loss in recommendation, we propose Softmax-DPO (S-DPO) to instill ranking information into the LM, helping LM-based recommenders distinguish preferred items from negatives rather than solely focusing on positives. Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of the DPO loss tailored for LM-based recommenders, connected to softmax sampling strategies. Theoretically, we bridge S-DPO with the softmax loss over negative sampling and find that it has a side effect of mining hard negatives, which assures its exceptional capabilities in recommendation tasks. Empirically, extensive experiments on three real-world datasets demonstrate the superiority of S-DPO in effectively modeling user preference and further boosting recommendation performance while mitigating the data likelihood decline issue of DPO.

📋 Catalogue

- ⚙️ Preparations
- ⌛️ Quick Start

⚙️ Preparations

Step 1. Install the requirements

Set up a virtualenv and install PyTorch manually. After that, install all the dependencies listed in requirements.txt by running the following command:

pip install -r requirements.txt

Our code has been tested with Python 3.9.7 and PyTorch 2.2.2+cu117.

⌛️ Quick Start

We provide sample LastFM data in the ./data folder. For further processing, please refer to data_interface.py.
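The abstract above describes converting a user's historical interactions into a language prompt, with a positive item as the target response. The exact prompt template is defined in data_interface.py; the sketch below is only a hypothetical illustration of the idea, and its function name and template wording are assumptions rather than the repository's actual format.

```python
def build_prompt(history_titles, candidate_titles):
    """Hypothetical prompt construction for an LM-based recommender on LastFM-style data."""
    history = ", ".join(f'"{t}"' for t in history_titles)
    candidates = ", ".join(f'"{t}"' for t in candidate_titles)
    return (
        f"The user has listened to the following artists: {history}. "
        f"From the candidates {candidates}, which artist will the user most likely listen to next? "
        "Answer with the artist name only."
    )

# During SFT (next step), the positive item's title would serve as the target response.
print(build_prompt(["Radiohead", "Portishead"], ["Massive Attack", "Coldplay"]))
```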

Run the following command to start Supervised Fine-Tuning (SFT) of the LM-based recommender:

python sft.py
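As described in the abstract, SFT fine-tunes the LM with a standard language modeling loss, using the positive item as the target response. A minimal sketch of that objective, assuming prompt tokens are masked out of the labels (tensor names are assumptions; the actual training loop lives in sft.py):

```python
import torch.nn.functional as F

def sft_loss(logits, labels):
    """Next-token cross-entropy over the response (positive item) tokens only.

    logits: (batch, seq_len, vocab) LM outputs
    labels: (batch, seq_len) token ids, with prompt positions set to -100 so that
            only the positive item's tokens contribute to the loss
    """
    return F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),  # predict token t+1 from the prefix up to t
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```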

Run the following command to start Direct Preference Optimization (DPO) training of the LM-based recommender:

python dpo.py
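This step runs standard DPO, which contrasts the positive item with a single negative. A minimal sketch, assuming per-sequence log-probabilities of each response have already been summed under the policy and the frozen reference model (argument names are assumptions):

```python
import torch.nn.functional as F

def dpo_loss(policy_pos_logp, policy_neg_logp, ref_pos_logp, ref_neg_logp, beta=1.0):
    """Standard DPO loss: -log sigmoid(beta * (positive margin - negative margin)).

    Each argument is a (batch,) tensor of summed token log-probabilities of the
    positive / negative item response given the prompt.
    """
    pos_margin = policy_pos_logp - ref_pos_logp
    neg_margin = policy_neg_logp - ref_neg_logp
    return -F.logsigmoid(beta * (pos_margin - neg_margin)).mean()
```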

Run the following command to start Softmax Direct Preference Optimization (S-DPO) training of the LM-based recommender:

python softmax_dpo.py
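This step trains with the S-DPO objective described in the abstract, which pools multiple negatives through a softmax-style term instead of a single pairwise comparison. Below is a minimal PyTorch sketch based on the paper's formulation; the tensor names and shapes are assumptions, and the repository's exact implementation is in softmax_dpo.py.

```python
import torch
import torch.nn.functional as F

def s_dpo_loss(policy_pos_logp, policy_neg_logps, ref_pos_logp, ref_neg_logps, beta=1.0):
    """S-DPO loss with multiple negatives per preference example.

    policy_pos_logp:  (batch,)   log-prob of the positive item response under the policy
    policy_neg_logps: (batch, N) log-probs of N negative item responses under the policy
    ref_*:            matching log-probs under the frozen reference model
    """
    pos_margin = beta * (policy_pos_logp - ref_pos_logp)     # (batch,)
    neg_margins = beta * (policy_neg_logps - ref_neg_logps)  # (batch, N)

    # -log sigmoid( -logsumexp_d(margin_neg_d - margin_pos) ): the logsumexp over the
    # negatives is the softmax-style pooling that implicitly emphasizes hard negatives.
    pooled = torch.logsumexp(neg_margins - pos_margin.unsqueeze(-1), dim=-1)
    return -F.logsigmoid(-pooled).mean()
```

With a single negative (N = 1) this expression reduces to the standard pairwise DPO loss sketched above.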

Run the following command to run inference and obtain the performance metrics:

python inference.py
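inference.py computes the repository's own evaluation; as an illustration only, a hit-ratio-style metric over generated item names could be computed as below. The metric choice and function name here are assumptions, not necessarily what inference.py reports.

```python
def hit_ratio_at_1(predictions, targets):
    """Fraction of users whose generated item exactly matches the held-out ground-truth item."""
    hits = sum(p.strip().lower() == t.strip().lower() for p, t in zip(predictions, targets))
    return hits / max(len(targets), 1)

# Example: two of the three generations match the held-out item.
print(hit_ratio_at_1(["Coldplay", "Radiohead", "Muse"], ["Coldplay", "Radiohead", "Oasis"]))
```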