huggingface / alignment-handbook

Robust recipes to align language models with human and AI preferences
https://huggingface.co/HuggingFaceH4
Apache License 2.0

DPO loss on different datasets #110

Open wj210 opened 8 months ago

wj210 commented 8 months ago

In parallel with #38, though in my case I am doing full training instead of LoRA.

When I use a different set of preference pairs (i.e. chosen and rejected completions) but keep the same instructions (UltraFeedback), I get an extremely low eval/train loss that drops sharply at the start of training, in contrast to training on the original preference pairs from ultrafeedback_binarized.
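For context, my setup is roughly the following. This is a minimal sketch rather than my exact script: it assumes a TRL version where `DPOTrainer` still takes `beta` directly, and the SFT checkpoint name plus the `my_prefs_*.jsonl` files are placeholders for my own model and preference data (same UltraFeedback prompts, different chosen/rejected completions).

```python
# Minimal sketch of the full-model DPO setup described above (not the exact
# handbook script). Each record in my_prefs_*.jsonl has the usual
# "prompt", "chosen", "rejected" string fields expected by DPOTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_model = "my-org/my-sft-checkpoint"  # placeholder for the SFT starting point

model = AutoModelForCausalLM.from_pretrained(sft_model)
ref_model = AutoModelForCausalLM.from_pretrained(sft_model)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(sft_model)

train_dataset = load_dataset("json", data_files="my_prefs_train.jsonl", split="train")
eval_dataset = load_dataset("json", data_files="my_prefs_eval.jsonl", split="train")

training_args = TrainingArguments(
    output_dir="dpo-custom-prefs",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.1,                 # DPO temperature
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```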

Eval loss on my pref dataset: [image]

Eval loss on the original pref dataset: [image]

Train loss (mine): [image]

Train loss (original): [image]

Reward margin (mine): [image]

Reward margin (original): [image]

This huge difference in scale seems to occur when I use preference datasets whose completions are sampled from the reference policy itself, whereas in UltraFeedback the completions are sampled from a variety of policies.
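For reference, the loss being minimized is the standard DPO objective (β is the temperature, π_ref the frozen reference policy):

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta;\pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

So the loss going to nearly zero just means the implicit reward margin between chosen and rejected is growing very large very quickly on my data; what I don't understand is why sampling both completions from the reference policy would make them this easy to separate.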

Moreover, this huge decrease in loss actually causes the DPO-ed model to perform worse across various benchmarks. Is there any intuition for why this happens?