JustAnOkapi opened 1 year ago
@JustAnOkapi I was able to do it with 6 GB of VRAM, but I'm pretty sure you can go lower by decreasing the batch size
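For context, a minimal sketch of what "decreasing the batch size" looks like with the textual inversion example script from the diffusers repo — the flag names are taken from that example, but the model name, paths, and tokens here are placeholders, not values from this thread:

```shell
# Hedged sketch: assumes the textual_inversion.py example script from the
# diffusers repo; the model id, data dir, and tokens are placeholders.
# A batch size of 1 plus gradient accumulation keeps the effective batch
# size at 4 while cutting peak VRAM.
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./concept_images" \
  --placeholder_token="<my-concept>" \
  --initializer_token="thing" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --mixed_precision="fp16" \
  --output_dir="./textual_inversion_out"
```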
Also, I'm working on a more advanced version at https://github.com/isamu-isozaki/diffusers as well, if you are interested
You can run textual inversion with just 6gb?!?
Haha yup! I only had 6 GB GPUs, so that's the reason I made this repo
@JustAnOkapi The GLIDE model always fits in 6 GB, since it first predicts just a 64x64 image and then scales it up to 256x256 using another model. Both fit in 6 GB with fp16. For the Hugging Face one from the link, I needed to put some models on the CPU, but it's still faster than Colab.
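To see why fp16 makes both stages fit, here is a back-of-the-envelope memory estimate. The parameter counts below are assumed round numbers in the rough ballpark of the released glide-text2im checkpoints, not figures from this thread, and the estimate covers weights only (activations add more on top):

```python
# Rough fp16 memory estimate for a two-stage pipeline like GLIDE's
# (64x64 base model + 64->256 upsampler). Weights only.

def fp16_weight_gb(num_params: int) -> float:
    """Memory for model weights alone in fp16 (2 bytes per parameter), in GiB."""
    return num_params * 2 / 1024**3

# Assumed, approximate parameter counts (placeholders for illustration):
BASE_PARAMS = 385_000_000       # 64x64 text-to-image base model
UPSAMPLER_PARAMS = 400_000_000  # 64->256 upsampler model

base_gb = fp16_weight_gb(BASE_PARAMS)
up_gb = fp16_weight_gb(UPSAMPLER_PARAMS)

print(f"base: {base_gb:.2f} GiB, upsampler: {up_gb:.2f} GiB, "
      f"total: {base_gb + up_gb:.2f} GiB")
```

Under these assumptions the weights together are well under 2 GiB, which leaves room for activations within a 6 GB card; in fp32 every figure would double.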
What is the minimum VRAM this requires?