VoyageWang opened this issue 2 months ago

Hi there!
Thanks for your nice work! I would like to know the minimal resources needed to train the overall pipeline of your model. I have 8 NVIDIA 3090 GPUs with 24 GB each. Is that enough?
Hello! We've only trained on 8 × A100 80G GPUs. ControlCap does not have many trainable parameters, so by reducing the batch size and increasing gradient_accumulate_steps, it should be possible to train ControlCap on 3090 24G GPUs.
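For illustration, here is a minimal, self-contained sketch of what gradient accumulation does in a generic PyTorch training loop. The model, data, and variable names are toy placeholders, not ControlCap's actual code or config keys:

```python
import torch
import torch.nn as nn

# Toy setup standing in for the real model and dataloader.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(12)]

accumulate_steps = 4  # effective batch = per-step batch (8) * 4 = 32

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(loader):
    loss = loss_fn(model(inputs), targets)
    # Scale the loss so accumulated gradients average over the effective batch.
    (loss / accumulate_steps).backward()
    # Only update weights once every accumulate_steps mini-batches.
    if (step + 1) % accumulate_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Halving the per-GPU batch size while doubling the accumulation steps keeps the effective batch size roughly unchanged while cutting peak memory, which is the trade-off suggested above.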
Thank you for your quick reply. I want to know how to change gradient_accumulate_steps in the configs or somewhere else; I didn't find the specific parameter corresponding to it.