boheumd / MA-LMM

(2024CVPR) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding
https://boheumd.github.io/MA-LMM/
MIT License

Is 8 x 4090 (24GB) GPUs enough to reproduce your code? #15

Closed longmalongma closed 6 months ago

longmalongma commented 6 months ago

Thank you for your work. I only have 8 x 4090 (24GB) GPUs; is this resource enough to reproduce your code?

boheumd commented 6 months ago

I have not tried it on 24GB GPUs before. But I think you can reduce the batch_size while increasing accum_grad_iters, keeping the total_batch_size the same, so that less GPU memory is needed per step. For example, see https://github.com/boheumd/MA-LMM/blob/main/run_scripts/breakfast/train.sh#L18
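The trade-off above is standard gradient accumulation: splitting one large batch into several smaller micro-batches, scaling each micro-batch gradient by 1/accum_grad_iters, and summing gives the same gradient as the full batch while holding far fewer activations in memory at once. A minimal sketch with a plain linear least-squares model (the function names and the 8/2/4 batch split are illustrative, not from the MA-LMM codebase):

```python
import numpy as np

def full_batch_grad(w, X, y):
    # Gradient of mean-squared error over the whole batch:
    # d/dw mean((Xw - y)^2) = 2/n * X^T (Xw - y)
    n = len(X)
    return 2.0 / n * X.T @ (X @ w - y)

def accumulated_grad(w, X, y, micro_batch, accum_iters):
    # Same gradient computed over accum_iters micro-batches.
    # Each micro-batch gradient is divided by accum_iters so the
    # accumulated sum equals the full-batch gradient, mirroring the
    # batch_size / accum_grad_iters trade-off described above.
    g = np.zeros_like(w)
    for i in range(accum_iters):
        Xi = X[i * micro_batch:(i + 1) * micro_batch]
        yi = y[i * micro_batch:(i + 1) * micro_batch]
        g += full_batch_grad(w, Xi, yi) / accum_iters
    return g

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))   # effective batch of 8
y = rng.normal(size=8)
w = rng.normal(size=4)

g_full = full_batch_grad(w, X, y)
g_accum = accumulated_grad(w, X, y, micro_batch=2, accum_iters=4)
assert np.allclose(g_full, g_accum)  # identical update, less memory per pass
```

In a real training loop the same idea means calling `backward()` on `loss / accum_grad_iters` for each micro-batch and stepping the optimizer only once every `accum_grad_iters` iterations.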

longmalongma commented 6 months ago

OK, thanks for your reply! I see!