I tried to reproduce the BoN (best-of-n) result in the paper but failed. Has anyone successfully reproduced it? I also list my steps for obtaining the BoN result in this post.
First, let me show my BoN result.
This result should be compared with the single RM curves in Figure 3a of the paper. My BoN result is different from the paper's result in two ways: 1) the initial gold RM scores are different, and 2) the final gold RM scores are different.
The RM training loss is invariant to offsets: shifting an RM's outputs by any constant will not change the loss. The trained RMs therefore end up with different, arbitrary offsets, which should be removed. In addition, the author suggested in https://github.com/tlc4418/llm_optimization/issues/12 that the centered rewards be further divided by the estimated standard deviation. I did this normalization in this repo rather than in the open-assistant codebase as the author suggested; the two ways should be effectively the same. Specifically, I replaced the get_reward function in https://github.com/tlc4418/llm_optimization/blob/f8a9ae6c5ff907deb206efffe9ddb3e62d279e85/src/reward_modeling/scoring/score.py#L18 with the following code:
def get_reward(
    samples,
    reward_models,
    reward_tokenizer,
    reward_device,  # needed?
    batch_size,
    objective_function=None,
    weight=None,
    is_alpacafarm_rm=False,
    normalize_reward=True,
):
    if not isinstance(reward_models, list):
        reward_models = [reward_models]
    input = reward_tokenizer(
        samples,
        padding=True,
        truncation=True,
        max_length=MAX_LEN,
        return_tensors="pt",
    ).to(reward_device)
    all_rewards = []
    for reward_model in reward_models:
        out = []
        for i in range(math.ceil(len(samples) / batch_size)):
            batch_ixs = slice(i * batch_size, (i + 1) * batch_size)
            input_ids = input.input_ids[batch_ixs]
            attention_mask = input.attention_mask[batch_ixs]
            output = reward_model(input_ids, attention_mask)
            rewards = output.rewards if is_alpacafarm_rm else output.logits[:, 0]
            out.extend(rewards)
        all_rewards.append(torch.hstack(out))
    if len(all_rewards) == 1:
        all_rewards = all_rewards[0]
        # add normalization here
        if normalize_reward:
            all_rewards = (all_rewards - reward_models[0].config.mean) / reward_models[0].config.std
        return all_rewards, torch.empty_like(all_rewards)
    # add normalization here
    if normalize_reward:
        for i in range(len(reward_models)):
            all_rewards[i] = (all_rewards[i] - reward_models[i].config.mean) / reward_models[i].config.std
    all_rewards = torch.stack(all_rewards, 0)
    var = torch.var(all_rewards, dim=0)
    if objective_function:
        all_rewards = objective_function(all_rewards, weight)
    return all_rewards, var
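The code above assumes each RM's config already carries the estimated mean and std of its raw scores. As a minimal sketch of how those constants could be estimated and attached (the function name, the stand-in model object, and the use of a held-out list of raw scores are my own assumptions, not part of the repo):

```python
import statistics
import types

def attach_normalization_stats(reward_model, heldout_rewards):
    # heldout_rewards: raw RM scores (floats) collected on a held-out sample set.
    # Hypothetical helper: the repo may compute and store these constants elsewhere.
    reward_model.config.mean = statistics.fmean(heldout_rewards)
    reward_model.config.std = statistics.stdev(heldout_rewards)

# Tiny demo with a stand-in "model" object (not the repo's model class):
rm = types.SimpleNamespace(config=types.SimpleNamespace())
attach_normalization_stats(rm, [1.0, 2.0, 3.0])
```

After this, `(reward - config.mean) / config.std` yields approximately zero-mean, unit-variance scores on the held-out distribution, which is what makes scores comparable across RMs trained with different seeds.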
Here are the other changes that I made to the codebase:

1. Added

       residual_dropout_lima: false

   to https://github.com/tlc4418/llm_optimization/blob/f8a9ae6c5ff907deb206efffe9ddb3e62d279e85/configs/config_rm.yaml#L49 (suggested in https://github.com/tlc4418/llm_optimization/issues/5).

2. Changed alpaca_farm_pref to custom_hf_pref, because the paper used the latter dataset for RM training.

3. When training RMs, the RM scores should not be normalized, so I put

       rewards, _ = get_reward(samples, model, tokenizer, model.device, batch_size=128, normalize_reward=False)

   in https://github.com/tlc4418/llm_optimization/blob/f8a9ae6c5ff907deb206efffe9ddb3e62d279e85/src/reward_modeling/training/trainer_rm.py#L341.

4. Changed gold_labelled_generations.map(_truncate_answers) to gold_labelled_generations.map(_truncate_answers, batched=True, batch_size=10) in https://github.com/tlc4418/llm_optimization/blob/f8a9ae6c5ff907deb206efffe9ddb3e62d279e85/src/bon/run_bon_pipeline.py#L77.

My steps to obtain the result:
1. Training RMs: run

       accelerate launch --config_file configs/accelerate_config.yaml src/reward_modeling/training/trainer_rm.py --configs defaults_rm rm-pythia-44m --rng_seed <seed>

   5 times, with <seed> being 1, 2, 3, 4, and 5.

2. Running the BoN pipeline:

       python src/bon/run_bon_pipeline.py models/rm-pythia-44m_seed{seed} --seeds 1,2,3,4,5 --ensembles
3. Plotting the proxy and gold scores:

       plt.xlabel("KL")
       plt.ylabel("proxy reward")
       plt.legend()
       plt.show()

   gold scores:

       seeds = [1, 2, 3, 4, 5]
       for s in seeds:
           f = open(seed_fn_prefix + str(s) + seed_fn_suffix)
           bon_res = json.load(f)
           xs, ys_gold = [], []
           for entry in bon_res:
               xs.append(math.log(entry['n']) - (entry['n'] - 1) / entry['n'])
               ys_gold.append(entry['gold_score'])
           plt.plot(xs, ys_gold, label="seed " + str(s))
           f.close()
       plt.ylim(0, 1.1)
       plt.xlabel("KL")
       plt.ylabel("gold reward")
       plt.legend()
       plt.show()
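The x-axis values in the plotting code use the closed-form KL divergence between the best-of-n policy and the base policy, KL = log(n) - (n - 1)/n. A small helper (the name is my own) makes the formula explicit:

```python
import math

def bon_kl(n: int) -> float:
    # KL(best-of-n || base policy) = log(n) - (n - 1) / n
    return math.log(n) - (n - 1) / n
```

bon_kl(1) is 0, and the value grows slowly with n, e.g. bon_kl(256) ≈ 4.55, which is why the curves cover only a modest KL range even for large n.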