basujindal / stable-diffusion

Optimized Stable Diffusion modified to run on lower GPU VRAM

The command for txt2img isn't working #172

Open Quil180 opened 1 year ago

Quil180 commented 1 year ago

Hi, the command isn't working for me. It is returning the following:

```
Global seed set to 27
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
UNet: Running in eps-prediction mode
Traceback (most recent call last):
  File "optimizedSD/optimized_txt2img.py", line 204, in <module>
    model = instantiate_from_config(config.modelUNet)
  File "d:\stable diffusion\stable-diffusion\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\stable diffusion\stable-diffusion\optimizedSD\ddpm.py", line 363, in __init__
    self.model2 = DiffusionWrapperOut(self.unetConfigDecode)
  File "d:\stable diffusion\stable-diffusion\optimizedSD\ddpm.py", line 318, in __init__
    self.diffusion_model = instantiate_from_config(diff_model_config)
  File "d:\stable diffusion\stable-diffusion\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\stable diffusion\stable-diffusion\optimizedSD\openaimodelSplit.py", line 754, in __init__
    ) if not use_spatial_transformer else SpatialTransformer(
  File "D:\stable diffusion\stable-diffusion\optimizedSD\splitAttention.py", line 259, in __init__
    [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim)
  File "D:\stable diffusion\stable-diffusion\optimizedSD\splitAttention.py", line 259, in <listcomp>
    [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim)
  File "D:\stable diffusion\stable-diffusion\optimizedSD\splitAttention.py", line 218, in __init__
    self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout)  # is a self-attention
  File "D:\stable diffusion\stable-diffusion\optimizedSD\splitAttention.py", line 167, in __init__
    nn.Linear(inner_dim, query_dim),
  File "C:\Users\youse\anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\linear.py", line 85, in __init__
    self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
```

I'm new to this, so to be honest I have no idea what could be causing this. In case it's needed, this is the command I ran:

```
python optimizedSD/optimized_txt2img.py --prompt "Cyberpunk style image of a Tesla car reflection in rain" --H 512 --W 512 --seed 27 --n_iter 2 --n_samples 1 --ddim_steps 50
```

I have a GTX 1060 3GB.
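For context on the number in the error: the failed allocation is small (6,553,600 bytes, about 6.25 MiB) and happens while `nn.Linear` builds one weight tensor during model construction. The arithmetic below is a sketch of how that figure decomposes; the 1280×1280 shape is an assumption for illustration, since the traceback does not print the actual dimensions.

```python
# The traceback shows nn.Linear failing inside
#   torch.empty((out_features, in_features))
# in fp32 (4 bytes per element). A 1280x1280 weight matrix is one
# shape that matches the reported byte count exactly (assumed here;
# the real dimensions are not printed in the log).
out_features, in_features = 1280, 1280
bytes_per_elem = 4  # float32

bytes_needed = out_features * in_features * bytes_per_elem
print(bytes_needed)            # 6553600, matching the error message
print(bytes_needed / 2**20)   # ~6.25 MiB
```

The takeaway is that the process was so close to its memory limit that even a ~6 MiB tensor could not be allocated, so many earlier, larger allocations had already succeeded before this one failed.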

basujindal commented 1 year ago

Hi, according to the error, you ran out of memory. Can you close any processes that might be using your GPU VRAM and try again? Cheers!
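For a sense of why a 3 GB card is tight for this workload, here is some rough back-of-envelope arithmetic. The ~860M parameter count for the Stable Diffusion v1 UNet is the commonly cited figure and is used here as an assumption, not something stated in this thread:

```python
# Back-of-envelope: do the SD v1 UNet weights alone fit in 3 GiB of VRAM?
# ~860M parameters is the commonly cited SD v1 UNet size (assumption);
# the GTX 1060 3GB figure comes from the report above.
UNET_PARAMS = 860_000_000
BYTES_FP32 = 4
BYTES_FP16 = 2
VRAM_GIB = 3.0

fp32_gib = UNET_PARAMS * BYTES_FP32 / 2**30  # ~3.2 GiB
fp16_gib = UNET_PARAMS * BYTES_FP16 / 2**30  # ~1.6 GiB

print(f"fp32 weights: {fp32_gib:.2f} GiB (exceeds {VRAM_GIB} GiB VRAM)")
print(f"fp16 weights: {fp16_gib:.2f} GiB (fits, before activations)")
```

This is why the optimized fork splits the UNet and moves parts between CPU and GPU, and why running in half precision matters on small cards: full-precision weights alone already exceed 3 GiB, before any activations or the text encoder and VAE are counted.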