Closed Xinxinatg closed 1 year ago
Hi @Xinxinatg,
Thank you for your interest in our work. Our model can be easily deployed on a single 24 GB GPU, so it can be deployed on a single A30 GPU (24 GB VRAM) as well. Thank you.
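A rough back-of-the-envelope check is consistent with this. The sketch below uses hypothetical numbers (a ~7B-parameter LLM held in fp16); the real footprint also depends on the visual encoder, activations, and the KV cache, so treat it as an estimate, not a guarantee:

```python
def weight_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Estimate GPU memory (GiB) for model weights alone.

    fp16/bf16 uses 2 bytes per parameter; fp32 would use 4.
    """
    return num_params * bytes_per_param / 1024**3

# Assumed model size: ~7B parameters, stored in half precision.
needed_gib = weight_memory_gib(7e9)   # ~13 GiB for weights alone
fits_single_a30 = needed_gib < 24     # a single A30 has 24 GB VRAM
```

The weights alone come to roughly 13 GiB, leaving headroom on a single 24 GB card for activations and the video features, which matches the maintainer's statement that one A30 suffices.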
that's great news, thank you!
You are most welcome!
Thanks for your wonderful work! I am wondering whether it would be possible to deploy it on a server with two A30 GPUs, each of which has 24 GB of VRAM.