Compared with offline DPO, online DPO performs better. Perhaps we could directly combine the sampling step from PPO with the existing DPO implementation.
The existing implementation in TRL is here, but it does not seem to support training 70B models.
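To make the idea concrete, here is a rough sketch of what such an online DPO step might look like: draw two completions from the current policy (as in a PPO rollout), rank them with a reward model to get a chosen/rejected pair, and apply the standard DPO loss to that pair. All names here (`policy`, `ref_model`, `reward_model`, `completion_logprob`, the generation settings) are hypothetical placeholders for illustration, not TRL's actual API.

```python
import torch
import torch.nn.functional as F

def completion_logprob(model, prompt_ids, completion_ids):
    # Sum of log-probs the model assigns to the completion tokens.
    input_ids = torch.cat([prompt_ids, completion_ids], dim=-1)
    logits = model(input_ids).logits
    # Logits at position t predict token t+1; slice to the completion span.
    logits = logits[:, prompt_ids.shape[-1] - 1 : -1, :]
    logps = torch.log_softmax(logits, dim=-1)
    return logps.gather(-1, completion_ids.unsqueeze(-1)).squeeze(-1).sum(-1)

def online_dpo_step(policy, ref_model, reward_model, tokenizer, prompt, beta=0.1):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

    # 1. Online sampling, as in a PPO rollout: draw two completions from the
    #    current policy instead of reading them from a fixed preference dataset.
    with torch.no_grad():
        gens = [
            policy.generate(prompt_ids, do_sample=True, max_new_tokens=64)
            for _ in range(2)
        ]
        comps = [g[:, prompt_ids.shape[-1]:] for g in gens]

        # 2. Rank the pair with a reward model (assumed to return a scalar
        #    score per full sequence) to label chosen vs. rejected.
        scores = [reward_model(g) for g in gens]
    hi, lo = (0, 1) if scores[0] >= scores[1] else (1, 0)
    chosen, rejected = comps[hi], comps[lo]

    # 3. Standard DPO loss on the freshly sampled preference pair, with a
    #    frozen reference model providing the baseline log-probs.
    pi_c = completion_logprob(policy, prompt_ids, chosen)
    pi_r = completion_logprob(policy, prompt_ids, rejected)
    with torch.no_grad():
        ref_c = completion_logprob(ref_model, prompt_ids, chosen)
        ref_r = completion_logprob(ref_model, prompt_ids, rejected)
    loss = -F.logsigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r))).mean()
    return loss
```

For 70B-scale training, the step above would still need to run under a sharded setup (e.g. DeepSpeed or FSDP), which is the part the current implementation does not seem to handle.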