SafeAILab / RAIN

[ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning
https://arxiv.org/abs/2309.07124
BSD 2-Clause "Simplified" License
71 stars 4 forks source link

test on HH-RLHF #3

Open LoverLost opened 3 months ago

LoverLost commented 3 months ago

I see the code and find that in the HH-RLHF dataset you use the red-team data for test. I want to know how the test scores are calculated? I didnt find ground-truth in the red-team dataset. How are the scores for harmless and helpful calculated in the paper?

hongyanz commented 3 months ago

We use GPT-4's evaluation as the ground-truth. We also show that GPT-4 and human share similar evaluation results in the paper.

shanpoyang654 commented 2 months ago

We use GPT-4's evaluation as the ground-truth. We also show that GPT-4 and human share similar evaluation results in the paper.

I got an output file named res_0.json which contains outputs of LLM. Do I need to put the outputs into GPT4 API to get the evaluation as the groundtruth? It means that there isn't an evaluation process in the code now, right? Thank you for your code and effort and hope for your reply!