OpenLMLab / MOSS-RLHF

Has anyone compared this training framework to TRL? #54

Open · StarrySeas1 opened this issue 1 month ago

StarrySeas1 commented 1 month ago

TRL's PPO implementation is simpler than this one and uses less memory. This framework adds a separate value (critic) network. I'm not sure which framework is more stable and effective.

refrain-wbh commented 2 weeks ago

While TRL does save one value-function network, it may be relatively harder to train, because the policy and value function share parameters. On the other hand, TRL's code encapsulates many optimizations, whereas our code applies no extra optimization tricks, making it easier to understand and modify.
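
For readers unfamiliar with the distinction, here is a minimal plain-PyTorch sketch (not the actual TRL or MOSS-RLHF code; the class names and toy backbone are illustrative only) contrasting the two designs: a shared backbone with a small value head versus a fully separate critic network.

```python
# Illustrative sketch only: a toy "backbone" stands in for a causal LM.
import torch
import torch.nn as nn


class Backbone(nn.Module):
    """Toy stand-in for a transformer language model."""
    def __init__(self, vocab_size=100, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids):
        h, _ = self.encoder(self.embed(input_ids))
        return self.lm_head(h), h  # (logits, hidden states)


class SharedValueHeadModel(nn.Module):
    """TRL-style: policy and value share one backbone; only a small value head is added."""
    def __init__(self, backbone, hidden=32):
        super().__init__()
        self.backbone = backbone
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, input_ids):
        logits, h = self.backbone(input_ids)
        values = self.value_head(h).squeeze(-1)
        # Policy and critic gradients both flow into the same backbone weights.
        return logits, values


class SeparateCritic(nn.Module):
    """Separate-critic style: the value function is a full, independent network."""
    def __init__(self, hidden=32):
        super().__init__()
        self.backbone = Backbone(hidden=hidden)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, input_ids):
        _, h = self.backbone(input_ids)
        return self.value_head(h).squeeze(-1)


if __name__ == "__main__":
    ids = torch.randint(0, 100, (2, 8))

    # Shared setup: one set of backbone weights serves both heads (less memory).
    shared = SharedValueHeadModel(Backbone())
    logits, values = shared(ids)

    # Separate setup: critic updates never touch the policy weights (more memory,
    # but policy and value training are decoupled).
    policy, critic = Backbone(), SeparateCritic()
    logits, _ = policy(ids)
    values = critic(ids)
```

The trade-off discussed above follows directly from this: the shared design saves roughly one model's worth of memory, while the separate design avoids the policy and value losses competing for the same parameters.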