Open Hariharan2608 opened 1 year ago
It depends on which models you need to use; see the GPU memory usage section of the README: https://github.com/microsoft/visual-chatgpt#gpu-memory-usage
I've already implemented a Google Colab version that loads only two models, T2I and ImageCaptioning, and everything works fine. You can try it in my Colab notebook; check out that Colab version.
You're better off using one of the forks/Colab versions that people have gotten running. Running this project without modification requires 8x GPUs.
Let me quote myself from here:
I did get this running in the end, on 8x NVIDIA A100 40 GB, but various bugs prevented it from fully working (for one, the masking for inpainting wasn't working; I'm not sure if the fork uses/fixes this model?).
"Anything goes, so long as you have 70 GB of VRAM..."
Not quite. I tried running it on 8x NVIDIA Tesla V100 16 GB but got an OOM on one of the cards when trying to generate an image. That is, each card needs to be big enough to run the models allocated to it.
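To make the per-card constraint concrete: the project is launched with a `--load` argument of comma-separated `ModelName_device` entries, so total VRAM matters less than what each individual device is asked to hold. Here is a minimal sketch of that accounting; the parser is illustrative (not the repo's actual code) and the per-model GB figures are hypothetical placeholders, not measured numbers.

```python
# Hypothetical per-model VRAM footprints (GB) for illustration only.
APPROX_VRAM_GB = {
    "ImageCaptioning": 7,
    "Text2Image": 7,
}

def parse_load(spec: str) -> dict[str, str]:
    """Map model name -> device from a spec like 'Text2Image_cuda:0'."""
    result = {}
    for entry in spec.split(","):
        name, _, device = entry.partition("_")
        result[name] = device
    return result

def vram_per_device(spec: str) -> dict[str, int]:
    """Sum the approximate VRAM each device must hold for its models."""
    totals: dict[str, int] = {}
    for name, device in parse_load(spec).items():
        totals[device] = totals.get(device, 0) + APPROX_VRAM_GB.get(name, 0)
    return totals

# Both models on one card: that card needs their combined footprint,
# which is how a 16 GB V100 can OOM even in an 8-GPU machine.
print(vram_per_device("ImageCaptioning_cuda:0,Text2Image_cuda:0"))
```

Spreading the same models across two devices (`ImageCaptioning_cuda:0,Text2Image_cuda:1`) halves the per-card requirement, which is why the model-to-device assignment, not the GPU count, decides whether you hit OOM.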
...Looking forward to the multimodal APIs coming soon, as you say.
Bro, your API key is exposed.
@tzcnbo Wow, that was careless of me; thanks for the heads-up, bro.
You can delete the API key in your account and generate a new one, in case it's needed.
How many GB of GPU memory are required to run this project?