Vision-CAIR / MiniGPT4-video

Official code for MiniGPT4-video
https://vision-cair.github.io/MiniGPT4-video/
BSD 3-Clause "New" or "Revised" License

GPU resource for pretraining and instruction tuning #3

Open 2000ZRL opened 2 months ago

2000ZRL commented 2 months ago

What an excellent work! Could you please share the GPU requirement (number and memory) for pretraining and instruction tuning? Thanks.

KerolosAtef commented 2 months ago

Hello @2000ZRL Thank you for your interest in our work.

For the video-text datasets:

- llama2: an A100 (80 GB) with batch size 4, or a V100 with batch size 1 (minimum GPU RAM: 32 GB).
- Mistral: only an A100 (80 GB) with batch size 1 (minimum GPU RAM: 80 GB).
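A quick way to apply the figures above is to map available GPU memory to a batch size before launching training. The following is a minimal illustrative sketch, not part of the MiniGPT4-video codebase; the function name and thresholds simply restate the numbers quoted in this reply:

```python
def suggest_batch_size(model_variant: str, gpu_mem_gb: float) -> int:
    """Illustrative helper: map GPU memory (GB) to the batch sizes
    quoted in this thread for MiniGPT4-video training.
    """
    if model_variant == "mistral":
        if gpu_mem_gb >= 80:
            return 1  # A100 80GB: batch size 1
        raise RuntimeError("Mistral variant needs at least 80 GB of GPU RAM")
    if model_variant == "llama2":
        if gpu_mem_gb >= 80:
            return 4  # A100 80GB: batch size 4
        if gpu_mem_gb >= 32:
            return 1  # V100 32GB: batch size 1
        raise RuntimeError("llama2 variant needs at least 32 GB of GPU RAM")
    raise ValueError(f"unknown model variant: {model_variant}")
```

With a real GPU you could feed in `torch.cuda.get_device_properties(0).total_memory / 1024**3` as `gpu_mem_gb`.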

2000ZRL commented 2 months ago

Thanks for your reply! Could you please also share the training time for the different model variants, e.g., llama2/mistral?