allenai / reward-bench

RewardBench: the first evaluation tool for reward models.
https://huggingface.co/spaces/allenai/reward-bench
Apache License 2.0

Add A New Generative Model #185

Closed ZhichaoWang970201 closed 2 months ago

ZhichaoWang970201 commented 2 months ago

Hi RewardBench Team 👋,

We have released a 12B generative model:

SF-Foundation/TextEval-OffsetBias-12B. Our local evaluation metrics for the model are listed below:

{ 'Chat': 0.9217877094972067, 'Chat Hard': 0.868421052631579, 'Safety': 0.9238221130221129, 'Reasoning': 0.937493179461996 }
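For reference, the leaderboard's overall score is typically the unweighted average of the category scores; a minimal sketch using the numbers above (the rounding choice here is an assumption, not part of the official scoring code):

```python
# Hypothetical sketch: overall score as the unweighted mean of the
# four category scores reported above.
scores = {
    "Chat": 0.9217877094972067,
    "Chat Hard": 0.868421052631579,
    "Safety": 0.9238221130221129,
    "Reasoning": 0.937493179461996,
}

overall = sum(scores.values()) / len(scores)
print(f"Overall: {overall:.4f}")  # → Overall: 0.9129
```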

How to Run the Evaluation Script

This generative model can be evaluated with the default scripts/run_generative.py script. Note that at least 8 GPUs are needed to run run_generative.py, and export VLLM_WORKER_MULTIPROC_METHOD=spawn is required for vLLM multi-GPU inference.

export VLLM_WORKER_MULTIPROC_METHOD=spawn
cd reward-bench
python scripts/run_generative.py --model=SF-Foundation/TextEval-OffsetBias-12B --num_gpus 8

We would like to add this new generative model to the RewardBench leaderboard.

Thank you!

natolambert commented 2 months ago

Reproduced! Will be added when we restart next!