CompVis / taming-transformers

Taming Transformers for High-Resolution Image Synthesis
https://arxiv.org/abs/2012.09841
MIT License

Parameters to use less memory #77

Open kstroevsky opened 3 years ago

kstroevsky commented 3 years ago

Hello! I'm trying to generate some pictures with VQGAN+CLIP on my laptop, which has 2 GB of VRAM. Unsurprisingly, I get out-of-memory errors, which is what my question is about.

Which model parameters would let me use less memory, at the cost of, for example, time or performance? Are they in the .yaml files under configs?
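For context, the model hyperparameters do live in the .yaml files under configs/, but for a pretrained checkpoint the architectural fields have to match the weights, so for pure inference the realistic memory knobs are the working image size and precision rather than the config itself. A minimal sketch for inspecting those fields, assuming omegaconf is installed and using a hypothetical config path:

```python
# Minimal sketch: inspect the memory-relevant fields of a taming-transformers
# config. The path below is hypothetical; substitute the .yaml you actually use.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/custom_vqgan.yaml")  # hypothetical path
ddconfig = cfg.model.params.ddconfig

# These fields set the encoder/decoder size; with pretrained weights they must
# match the checkpoint, so they are only tunable if you train from scratch.
print("base channels:", ddconfig.ch)
print("channel multipliers:", ddconfig.ch_mult)
print("latent channels:", ddconfig.z_channels)
print("training resolution:", ddconfig.resolution)
```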

adeptflax commented 3 years ago

You would probably need 10–20 GB of RAM; I don't think there's any way to reduce that. I use vast.ai to train models, and the RTX 3090 seems to be the GPU to train with.

danilaplee commented 2 years ago

I'm new around here, with an RTX 3080 and about two days on this tech, but I still think using ~7 GB for every model.decode call is a bit excessive :) Maybe we can look into this issue together?
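If the goal is just to trim decode-time memory, one generic PyTorch trick (not something this repo does itself) is to run the decoder under no_grad with mixed precision and free the CUDA cache afterwards. A minimal sketch, assuming a loaded VQModel called model and a quantized latent z already on the GPU:

```python
# Minimal sketch: lower-memory decode under no_grad + fp16 autocast.
import torch

@torch.no_grad()
def decode_low_mem(model, z):
    # autocast runs the decoder's convolutions in fp16 where it is safe to do so
    with torch.cuda.amp.autocast():
        x = model.decode(z)
    torch.cuda.empty_cache()  # release cached blocks after the large decode
    return x.float()
```

Casting the whole model to .half() would save more, but autocast is the safer first step since it keeps normalization layers in fp32.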