isamu-isozaki / fewshot-textual-inversion-low-memory

Script to train textual inversion model with low memory
MIT License

Low memory? #2

Open JustAnOkapi opened 1 year ago

JustAnOkapi commented 1 year ago

What is the minimum VRAM this requires?

isamu-isozaki commented 1 year ago

@JustAnOkapi I was able to do it with 6 GB of VRAM, but I'm pretty sure you can go lower by decreasing the batch size
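(A minimal sketch of the trade-off mentioned above: shrinking the per-step batch while adding gradient-accumulation steps keeps the effective batch size the same but lowers peak VRAM, since activations are only held for one micro-batch at a time. The helper name and numbers are illustrative, not from this repo.)

```python
def accumulation_steps(target_batch: int, micro_batch: int) -> int:
    """Number of gradient-accumulation steps so that
    micro_batch * steps == target_batch.

    micro_batch must evenly divide target_batch.
    """
    if target_batch % micro_batch != 0:
        raise ValueError("micro_batch must divide target_batch")
    return target_batch // micro_batch

# e.g. an effective batch of 8 built from micro-batches of 2
# needs 4 accumulation steps before each optimizer update.
print(accumulation_steps(8, 2))
```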

isamu-isozaki commented 1 year ago

Also, I'm working on a more advanced version at https://github.com/isamu-isozaki/diffusers too, if you are interested

JustAnOkapi commented 1 year ago

You can run textual inversion with just 6gb?!?

isamu-isozaki commented 1 year ago

Haha, yup! I only had 6 GB GPUs, so that's the reason I made this repo

isamu-isozaki commented 1 year ago

@JustAnOkapi The GLIDE model always fits in 6 GB since it first predicts just a 64x64 image and then upscales it to 256x256 using a second model; both fit in 6 GB with fp16. For the Hugging Face one from the link, I needed to offload some models to the CPU, but it's still faster than Colab.
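(As a rough back-of-the-envelope check of why fp16 helps here, weight memory is just parameter count times 2 bytes per fp16 parameter. The parameter counts below are illustrative placeholders, not the actual GLIDE checkpoint sizes.)

```python
def param_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GiB) needed just for the model weights.

    fp16 uses 2 bytes per parameter; fp32 would use 4.
    This ignores activations, gradients, and optimizer state,
    which dominate during training.
    """
    return num_params * bytes_per_param / 1024**3

# A hypothetical 1B-parameter model in fp16:
print(round(param_memory_gb(1e9), 2))  # ~1.86 GiB for weights alone
```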