Hi,
I am confused about the 'value function' in the InstructGPT paper. The paper says: "As previously mentioned, for all PPO models we use a 6B RM and a 6B value function, and the latter is initialized from the former." The reward model (RM) and the value function thus seem to be two separate models. However, there is no indication of how the value function is involved in the PPO RL training, either in the objective function or elsewhere in the paper.
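For context, my current understanding is that in standard PPO the value function does not appear in the reward objective itself; it is used to estimate advantages (e.g. via GAE) and is trained with its own value loss. A minimal sketch of that role (purely illustrative, not InstructGPT's actual code; `gamma`, `lam`, and the toy numbers are my own assumptions):

```python
# Illustrative sketch of where a value function V(s) enters standard PPO:
# it supplies baselines for Generalized Advantage Estimation (GAE) and is
# trained toward the returns with a squared-error loss. This is NOT the
# InstructGPT implementation, just the textbook PPO mechanism.

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute GAE advantages.

    `values` carries one extra bootstrap entry, so
    len(values) == len(rewards) + 1.
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # TD residual uses the value function as a baseline.
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages


def value_loss(values, returns):
    """Mean squared error that trains the value function toward returns."""
    return sum((v - r) ** 2 for v, r in zip(values, returns)) / len(values)


# Toy rollout: reward only at the final token, as in RLHF-style setups.
rewards = [0.0, 0.0, 1.0]
values = [0.5, 0.6, 0.7, 0.0]  # last entry is the bootstrap value
adv = gae_advantages(rewards, values)
```

So my question is whether the 6B value function plays exactly this standard role in the paper's PPO training, or something else.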
Thanks