llava-rlhf / LLaVA-RLHF

Aligning LMMs with Factually Augmented RLHF
https://llava-rlhf.github.io/

Question about the optimization time #33

Closed: JulioZhao97 closed this issue 3 months ago

JulioZhao97 commented 3 months ago

Could you please tell me how long the training takes and how many GPUs it requires?

Edward-Sun commented 3 months ago

Hi @JulioZhao97, the PPO training takes 1-2 days on 8 x A100-80G GPUs.
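
For budgeting purposes, the quoted figures work out to roughly 192-384 A100-80G GPU-hours for the PPO stage. A minimal sketch of that arithmetic, using only the numbers from the reply above (the wall-clock range is the maintainer's estimate, not a measured value):

```python
# GPU-hour budget for the PPO stage, derived from the figures quoted above:
# 8 x A100-80G for 1-2 days. These are estimates, not benchmarked numbers.

NUM_GPUS = 8                # A100-80G cards, per the reply above
DAYS_MIN, DAYS_MAX = 1, 2   # estimated wall-clock training time

gpu_hours_min = DAYS_MIN * 24 * NUM_GPUS
gpu_hours_max = DAYS_MAX * 24 * NUM_GPUS

print(f"PPO training budget: {gpu_hours_min}-{gpu_hours_max} A100-80G GPU-hours")
# -> PPO training budget: 192-384 A100-80G GPU-hours
```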