
How are you able to host your model on a GPU with < 16 GB RAM #63

Open matemato opened 1 year ago

matemato commented 1 year ago

Hello,

I am trying to reproduce your text-to-pokemon work and compare it to my own model trained on Bulbapedia descriptions. Once my work is finished, I would love to host it on Google Colab like you did. However, when I try to generate images the way you did, I get a CUDA out-of-memory error on GPUs with < 16 GB RAM, yet I see that your model is hosted on a Tesla T4 (16 GB RAM) GPU on Hugging Face. I also see that you created a Google Colab (although it no longer works) that ran on the same GPU.
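
For reference, here is roughly the kind of generation code I am running when the out-of-memory error occurs. This is a minimal sketch against the diffusers `StableDiffusionPipeline` API; `lambdalabs/sd-pokemon-diffusers` is my guess at the published checkpoint id, and the prompt is just an example:

```python
# Minimal sketch of the generation I am attempting.
# Assumption: "lambdalabs/sd-pokemon-diffusers" is the published checkpoint id.
from diffusers import StableDiffusionPipeline

# Default full-precision (fp32) load; this is what runs out of memory for me
# on GPUs with less than 16 GB of VRAM.
pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/sd-pokemon-diffusers")
pipe = pipe.to("cuda")

image = pipe("Yoda", height=512, width=512).images[0]
image.save("yoda_pokemon.png")
```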

Does anyone have any idea how to generate 512x512 images with the GPUs available on the Google Colab free tier?
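
In case it helps the discussion, these are the memory-saving options I have been considering, again as a sketch against the diffusers API (same assumed checkpoint id as above); I have not been able to confirm whether they are enough to fit 512x512 generation on the free-tier GPUs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights in half precision, which roughly halves the VRAM needed
# for the model parameters.
pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers",  # assumed checkpoint id, as above
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Compute attention in slices instead of all at once; slower, but it lowers
# the peak memory of the 512x512 attention maps substantially.
pipe.enable_attention_slicing()

image = pipe("Yoda", height=512, width=512).images[0]
image.save("yoda_pokemon_fp16.png")
```

My understanding is that attention slicing trades some generation speed for a lower peak memory footprint, which seems like the right trade-off on the free tier, but I would appreciate confirmation from anyone who has actually run this on a T4 or smaller GPU.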

Thank you so much for your help!