namdw opened this issue 2 months ago
I don't fully understand the calculations yet, so I don't know why, but during my tests the calculated probabilities all come out as 0.5. Is this right?
Sorry. It was my bad.
I have the same problem as @namdw.
It seems that LLM Blender uses its pair-wise RM to build a relative score table over all pairs (i, j) from the logits of choosing "i" or "j" as the output. Since SPPO's code sets the `return_scores` param to True, LLM Blender then aggregates with the default "max_logits" strategy, which yields a single score per candidate indicating the average relative score of that candidate being better than the other responses.
So the score list returned by `rank` depends on the prompt, the chosen candidate, and all the candidates in the list, rather than on the prompt and a single pair of candidates, which is what the paper denotes as $s(y, y'; x)$.
I think the code from @namdw goes the right way, but to align with the paper, should we also put $e^{score(i,j)}$ instead of $1$ in the numerator when calculating $prb(i, j)$?
https://github.com/uclaml/SPPO/blob/e524519cc87e9e48cd4da30588f7aa566638df4c/scripts/compute_prob.py#L39
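For what it's worth, here is a minimal numeric sketch (plain Python, made-up scores, not real blender outputs) comparing the two formulations: the Bradley-Terry style with $e^{score(i,j)}$ in the numerator versus the sigmoid of a score difference. When both forms are fed the same two numbers they are algebraically identical, and equal scores always give exactly 0.5, which may be related to the 0.5 values reported above.

```python
import math

def pair_prob(s_ij, s_ji):
    # e^{score(i,j)} in the numerator, as suggested above (hypothetical scores)
    return math.exp(s_ij) / (math.exp(s_ij) + math.exp(s_ji))

def diff_prob(s_i, s_j):
    # sigmoid of a score difference: 1 / (1 + e^{s_j - s_i})
    return 1.0 / (1.0 + math.exp(s_j - s_i))

# Algebraically identical when fed the same two numbers:
print(pair_prob(0.8, -0.3))  # ~0.7503
print(diff_prob(0.8, -0.3))  # same value

# Equal scores give 0.5 regardless of magnitude:
print(pair_prob(2.0, 2.0))   # 0.5
```

So if the scores being compared are ever equal (or the same score is compared against itself), 0.5 is the expected output rather than a bug in the formula itself.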
From my understanding of the code, the score list here is the output of
`blender.rank(*, return_scores=True)`
which should output the average relative score of the response at that index being better than the other responses. Please correct me if I'm wrong. For example, given three responses {y1, y2, y3}, the first element of the scores output by the blender model (s1, s2, s3) is s1 = P(y1 > y2) + P(y1 > y3), disregarding the constant coefficient, where P is a general preference score function, not a probability (references: the blender code and their paper).
Thus the difference of two scores, e.g. s1 - s2, also depends on the third response y3, which seems a bit different from what is described in the paper.
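A tiny sketch of that dependence (toy numbers, not real blender outputs): aggregate each row of a pairwise score table into per-candidate scores, then change only the entries involving y3. The difference s1 - s2 moves even though the y1-vs-y2 entries are untouched.

```python
def aggregate(pair):
    # pair[i][j]: toy relative score of candidate i over candidate j
    n = len(pair)
    return [sum(pair[i][j] for j in range(n) if j != i) for i in range(n)]

# Two tables that agree on the y1-vs-y2 entries but differ on y3's rows/columns.
table_a = [[0.0, 0.6, 0.9],
           [0.4, 0.0, 0.2],
           [0.1, 0.8, 0.0]]
table_b = [[0.0, 0.6, 0.1],   # only entries involving y3 changed
           [0.4, 0.0, 0.9],
           [0.9, 0.1, 0.0]]

sa = aggregate(table_a)   # s1 - s2 = (0.6 + 0.9) - (0.4 + 0.2) = 0.9
sb = aggregate(table_b)   # s1 - s2 = (0.6 + 0.1) - (0.4 + 0.9) = -0.6
print(sa[0] - sa[1], sb[0] - sb[1])
```

Same y1-vs-y2 preferences, different s1 - s2, so the aggregated scores are not a function of the pair alone.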
In summary, I feel it is more appropriate to use the score output from the blender with just two responses (although I don't think this would make a significant difference in performance), e.g.,
(sorry for the badly coded example)
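Since the example didn't come through, here is a rough sketch of the pair-at-a-time idea. `score_pair` is a hypothetical wrapper; in practice it might wrap something like `blender.rank([prompt], [[a, b]], return_scores=True)` (assumed llm-blender usage), so that each score only ever sees the two responses in question.

```python
from itertools import permutations

def pairwise_table(prompt, candidates, score_pair):
    """Score every ordered pair in isolation, so s(i, j) never
    depends on a third candidate in the list."""
    n = len(candidates)
    table = [[0.0] * n for _ in range(n)]
    for i, j in permutations(range(n), 2):
        table[i][j] = score_pair(prompt, candidates[i], candidates[j])
    return table

# Stub scorer for demonstration only (longer answer "wins"):
toy_score = lambda prompt, a, b: float(len(a) - len(b))
print(pairwise_table("q", ["short", "a bit longer", "mid"], toy_score))
```

The trade-off is cost: this makes O(n^2) ranker calls per prompt instead of one call over the whole candidate list, which is probably why the repo batches all candidates at once.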