agi-templar / Stable-Alignment

Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
https://arxiv.org/pdf/2305.16960.pdf

Implementation of RRHF #5

Closed: Guochry closed this issue 11 months ago

Guochry commented 1 year ago

First, many thanks for your amazing work on alignment! I am curious about the implementation of RRHF in your first experiment: was it initialized from the open-source Wombat models provided by the RRHF team, or from a model you trained yourself with the RRHF method?

agi-templar commented 1 year ago

We used the open-source implementation and trained the model on our dataset for a fair comparison (so that the data factor is controlled). We found the main issue with RRHF is that if multiple completions receive the same rating, the ranking-based loss calculation breaks down (how do you decide their relative ranking when their scores are tied?). Other users have reported the same issue (see https://github.com/GanjinZero/RRHF/issues/25).
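For concreteness, here is a minimal sketch of a pairwise ranking loss in this style (variable names and loop structure are illustrative, not the RRHF repo's actual code). Note how tied-rating pairs fall through the comparison and contribute nothing, while forcing an arbitrary order on them would wrongly push apart equally rated completions:

```python
import torch

def pairwise_rank_loss(logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Sketch of an RRHF-style ranking loss. `logprobs` holds the
    length-normalized sequence log-probabilities under the policy,
    `rewards` the scalar ratings, one entry per completion."""
    loss = logprobs.new_zeros(())
    n = rewards.shape[0]
    for i in range(n):
        for j in range(n):
            if rewards[i] < rewards[j]:
                # Penalize scoring the worse completion above the better one.
                loss = loss + torch.relu(logprobs[i] - logprobs[j])
            # When rewards[i] == rewards[j], no ordering is defined:
            # the pair is silently dropped, which is the tie problem
            # discussed above.
    return loss
```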

Stable Alignment does not have this issue because we use the difference between the scores to modulate the margin. If the difference is zero (same score, same ranking), the margin for contrastive learning is canceled. The floor operation in the loss (via torch.max) also ensures we do not over-optimize: samples that are already aligned contribute zero loss and are effectively skipped during learning.
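A minimal sketch of this idea, assuming the highest-rated completion sits at `best_idx` (the names and the `margin_scale` knob are mine for illustration; see the repo for the exact loss):

```python
import torch

def score_modulated_margin_loss(logprobs: torch.Tensor,
                                rewards: torch.Tensor,
                                best_idx: int = 0,
                                margin_scale: float = 1.0) -> torch.Tensor:
    """Contrastive loss whose margin is scaled by the rating gap.
    `logprobs` are the policy's length-normalized sequence
    log-probabilities, `rewards` the scalar ratings."""
    diffs = rewards[best_idx] - rewards      # rating gap to the best sample
    margins = margin_scale * diffs           # zero gap -> zero margin
    # Require the best completion to out-score each other completion
    # by its margin; torch.max against 0 floors the loss, so pairs that
    # already satisfy the margin add nothing and are not over-optimized.
    per_pair = torch.max(
        torch.zeros_like(margins),
        margins - (logprobs[best_idx] - logprobs),
    )
    mask = torch.ones_like(per_pair)
    mask[best_idx] = 0.0                     # exclude the self-comparison
    return (per_pair * mask).sum() / mask.sum()
```

With a tied completion, `diffs` is zero, so the margin vanishes and the pair is not pushed apart, which is how the tie problem above is avoided.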

Guochry commented 1 year ago

Many thanks again!