I think you can replace most of the logic in `_get_batch_logps` with `torch.nn.CrossEntropyLoss`, and then, to get the average instead of the sum, divide by the number of tokens that are not ignored. This would remove some of the bespoke code from the repo. Do you think this is a correct interpretation of the maths in the paper?
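Roughly what I have in mind, as a sketch only (using `F.cross_entropy`, the functional form of `CrossEntropyLoss`; the `-100` ignore convention, the one-token shift, and the function name are my assumptions about how the labels are prepared, not code from the repo):

```python
import torch
import torch.nn.functional as F

def batch_logps_via_cross_entropy(logits, labels, average_log_prob=False):
    """Sketch: per-sequence log p(y|x) computed with F.cross_entropy.

    Assumes `logits` is (batch, seq_len, vocab) and `labels` is (batch, seq_len),
    with ignored positions (e.g. prompt tokens) set to -100, and that each label
    is the next token after the corresponding input position.
    """
    labels = labels[:, 1:]
    logits = logits[:, :-1, :]
    # per-token negative log-likelihood; ignored positions contribute 0
    per_token_nll = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, vocab, seq_len), as F.cross_entropy expects
        labels,
        ignore_index=-100,
        reduction="none",
    )
    mask = labels != -100
    logps = -per_token_nll.sum(-1)  # summed log-probability per sequence
    if average_log_prob:
        return logps / mask.sum(-1)  # average over non-ignored tokens
    return logps
```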
DPO relies on \log p(y|x) = \sum_t \log p(y_t | x, y_{<t}), i.e. the sum of per-token log-probabilities; no average over tokens is taken.
Even if an average were wanted, the provided implementation differs from F.cross_entropy with reduction='mean': F.cross_entropy averages over all non-ignored tokens in the batch at once, whereas here the log-probabilities are averaged over the tokens within each sequence, and the result is then averaged over sequences.
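A small example of the difference (the shapes and the -100 label convention below are only for illustration):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(2, 5, 10)           # (batch, seq_len, vocab)
labels = torch.randint(0, 10, (2, 5))
labels[0, 3:] = -100                     # sequence 0: 3 valid tokens, sequence 1: 5

per_token_nll = F.cross_entropy(
    logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
)
mask = labels != -100

# reduction='mean' corresponds to one average over all non-ignored tokens in the batch
flat_mean = per_token_nll.sum() / mask.sum()

# averaging within each sequence first, then across sequences
per_seq_mean = (per_token_nll.sum(-1) / mask.sum(-1)).mean()

# the two disagree whenever sequences have different numbers of valid tokens
print(flat_mean.item(), per_seq_mean.item())
```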