uclaml / SPPO

The official implementation of Self-Play Preference Optimization (SPPO)
https://uclaml.github.io/SPPO/
Apache License 2.0

Scores and probability calculations #15

Open namdw opened 4 months ago

namdw commented 4 months ago

https://github.com/uclaml/SPPO/blob/e524519cc87e9e48cd4da30588f7aa566638df4c/scripts/compute_prob.py#L39

From my understanding of the code, the score list here is the output of blender.rank(*, return_scores=True), which should be the average relative score of the response at each index being better than the other responses. Please correct me if I'm wrong.

For example, given three responses {y1, y2, y3}, the first element of the scores (s1, s2, s3) output by the blender model is s1 = P(y1 > y2) + P(y1 > y3), disregarding the constant coefficient, where P is a general preference score function, not a probability. [references from blender code and their paper]

Thus, the difference of two scores, e.g., s1 - s2, also depends on the third response y3, which seems different from what is described in the paper.
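
Writing it out with the same convention, $s_1 - s_2 = \big[P(y_1 > y_2) + P(y_1 > y_3)\big] - \big[P(y_2 > y_1) + P(y_2 > y_3)\big]$, which still carries the $y_3$ terms $P(y_1 > y_3)$ and $P(y_2 > y_3)$.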

In summary, I feel it is more appropriate to use the score output from the blender with just two responses (although I don't think this would make a significant difference in performance), e.g.,

score = blender.rank([x], [[yj, yi]], return_scores=True)[0, 0]  # relative score for yj when compared only against yi
prb[i][j] = 1 / (1 + np.exp(score))  # map the pairwise score to a probability via the logistic

(sorry for the badly coded example)
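
Something like this is what I have in mind (a rough sketch assuming the llm_blender PairRM ranker API; x, candidates, and prb are placeholder names for the prompt, the response list, and the probability matrix):

import numpy as np
import llm_blender

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # pairwise ranking model

def pairwise_prob_matrix(x, candidates):
    # prb[i][j] is meant to estimate the preference probability for the pair (y_i, y_j)
    n = len(candidates)
    prb = np.full((n, n), 0.5)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # rank only the pair (y_j, y_i), so the returned score involves no third response
            score = blender.rank([x], [[candidates[j], candidates[i]]], return_scores=True)[0, 0]
            prb[i][j] = 1.0 / (1.0 + np.exp(score))
    return prb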

kaykyr commented 4 months ago

I don't understand the calculations yet. I don't know why, but during my tests the calculated probabilities all come out as 0.5. Is this right?

kaykyr commented 4 months ago

> I don't understand the calculations yet. I don't know why, but during my tests the calculated probabilities all come out as 0.5. Is this right?

Sorry. It was my bad.

xukp20 commented 3 months ago

I have the same problem as @namdw.

It seems that LLM Blender uses its pairwise RM to create the relative score table for all pairs (i, j) based on the logits of choosing "i" and "j" as the output. Since SPPO's code sets the return_scores param to True, LLM Blender will then aggregate using the "max_logits" strategy by default, which gives a single score for each candidate indicating the average relative score of that candidate being better than the other responses.
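
A toy illustration of why that matters (just a row mean over a made-up pairwise logit table, not LLM Blender's actual "max_logits" code):

import numpy as np

# hypothetical pairwise logits: logits[i, j] = logit for "candidate i beats candidate j"
logits = np.array([[0.0,  1.2, -0.3],
                   [-1.2, 0.0,  0.8],
                   [0.3, -0.8,  0.0]])

# aggregated per-candidate scores as described above: each candidate is averaged against
# all other candidates, so scores[0] - scores[1] still depends on the third candidate
scores = logits.mean(axis=1)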

So the returned score list of "rank" depends on the prompt, the chosen candidate, and all the candidates in the list, rather than just the prompt and a pair of candidates, which is what $s(y, y'; x)$ in the paper denotes.

Screenshot 2024-08-24 141825

I think the code from @namdw goes in the right direction, but to be aligned with the paper, should we also put $e^{score(i,j)}$ instead of $1$ in the numerator when calculating $prb(i, j)$?
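
To spell out the two options, with $s$ denoting the score returned for a pair: $prb(i,j) = \frac{1}{1+e^{s}}$ versus $prb(i,j) = \frac{e^{s}}{1+e^{s}} = \sigma(s)$. If I read it right, they differ only in which response the returned score is taken to favor, so the sign convention of the blender output decides which form matches $P(y_i > y_j)$ in the paper.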

MeckyWu commented 2 months ago

Hi @namdw @xukp20

I'm sorry for not getting back to you sooner. You are right. The correct implementation should calculate the win-rate matrix first and then take the mean over each row. The current implementation takes the mean over the score matrix (whose logistic is the win rate) and then feeds that into a BT model.

We will fix this and re-run the experiments ASAP. The performance difference should be negligible, since the numerical difference in the win rates produced by the two methods is rather small.
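
Concretely, the two orderings look something like this (a rough numpy sketch with a hypothetical pairwise score matrix, not the actual compute_prob.py code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical pairwise scores: scores[i, j] = relative score of response i over response j
scores = np.random.randn(5, 5)

# intended: turn each pairwise score into a win rate, then average over the row
winrate_then_mean = sigmoid(scores).mean(axis=1)

# current implementation as described above: average the scores first, then apply the logistic
mean_then_winrate = sigmoid(scores.mean(axis=1))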