eric-mitchell / direct-preference-optimization

Reference implementation for DPO (Direct Preference Optimization)
Apache License 2.0

Using cross entropy loss to calculate DPO? #67

Open zachares opened 8 months ago

zachares commented 8 months ago

I think most of the logic in _get_batch_logps could be replaced with torch.nn.CrossEntropyLoss; to get the average instead of the sum, you would divide by the number of tokens that are not ignored. This would remove some of the bespoke code from the repo. Do you think this is a correct interpretation of the maths in the paper?
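
For reference, here is a minimal sketch of what that could look like, assuming logits of shape (batch, seq_len, vocab), labels of shape (batch, seq_len) with -100 marking prompt/padding tokens, and the same shift-by-one convention as the repo's _get_batch_logps. The function name is hypothetical, not from the repo:

```python
import torch
import torch.nn.functional as F

def get_batch_logps_via_ce(logits: torch.Tensor,
                           labels: torch.Tensor,
                           average_log_prob: bool = False) -> torch.Tensor:
    """Sketch: per-sequence log p(y|x) computed with F.cross_entropy."""
    # Shift so that tokens < t predict token t.
    labels = labels[:, 1:].clone()
    logits = logits[:, :-1, :]

    # reduction='none' returns -log p(y_t | x, y_{<t}) per token;
    # ignore_index=-100 zeroes out ignored positions.
    per_token_nll = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, vocab, seq_len), as cross_entropy expects
        labels,
        ignore_index=-100,
        reduction="none",
    )
    per_token_logps = -per_token_nll
    loss_mask = (labels != -100).float()

    if average_log_prob:
        # Average over the non-ignored tokens of each sequence.
        return (per_token_logps * loss_mask).sum(-1) / loss_mask.sum(-1)
    # Sum of token log-probs, i.e. log p(y|x), as used in the DPO loss.
    return (per_token_logps * loss_mask).sum(-1)
```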

Tianranse commented 8 months ago

I have a similar question about getting the average of the losses.

ChenmienTan commented 7 months ago
  1. DPO relies on \log p(y|x) = \sum_t \log p(y_t | x, y_{<t}), where the average over tokens is not applied.
  2. Even if it were, the provided implementation is different from reduction='mean' in F.cross_entropy: F.cross_entropy averages over all non-ignored tokens in the batch, whereas here the loss is averaged over the tokens within each sequence and then averaged over sequences, as illustrated in the sketch below.
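
A toy sketch of that distinction, with hypothetical shapes and random data (not code from the repo):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, seq_len, vocab = 2, 5, 11
logits = torch.randn(batch, seq_len, vocab)
labels = torch.randint(0, vocab, (batch, seq_len))
labels[0, 3:] = -100  # sequence 0 has 3 valid tokens, sequence 1 has 5

per_token_nll = F.cross_entropy(
    logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
)
mask = (labels != -100).float()

# reduction='mean': one average over all non-ignored tokens in the batch (3 + 5 = 8).
global_mean = per_token_nll.sum() / mask.sum()
assert torch.allclose(
    global_mean,
    F.cross_entropy(logits.transpose(1, 2), labels,
                    ignore_index=-100, reduction="mean"),
)

# Per-sequence average, then average over sequences: each sequence is weighted
# equally, regardless of its length.
per_seq_mean = (per_token_nll * mask).sum(-1) / mask.sum(-1)
two_stage_mean = per_seq_mean.mean()

print(global_mean.item(), two_stage_mean.item())  # generally not equal
```

The two quantities coincide only when all sequences have the same number of non-ignored tokens.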