AUTOMATIC1111 / stable-diffusion-webui-aesthetic-gradients

Aesthetic gradients extension for web ui

[Bug] VRAM Error #11

Open misakitchi opened 1 year ago

misakitchi commented 1 year ago

I have an old GPU (GTX 750 Ti) with only 2 GB of VRAM. I use the --lowvram option and can run SD up to 512x1024 images.

But when I activate the "aesthetic-gradients" extension and try to generate an image, I get a VRAM error! :(

```
Traceback (most recent call last):
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\webui.py", line 56, in f
    res = func(*args, **kwargs)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\txt2img.py", line 48, in txt2img
    processed = process_images(p)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\processing.py", line 423, in process_images
    res = process_images_inner(p)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\processing.py", line 508, in process_images_inner
    uc = prompt_parser.get_learned_conditioning(shared.sd_model, len(prompts) * [p.negative_prompt], p.steps)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\prompt_parser.py", line 138, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 558, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\modules\sd_hijack.py", line 338, in forward
    z1 = self.process_tokens(tokens, multipliers)
  File "C:\Bin\AI\SD\auto1111\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-gradients\aesthetic_clip.py", line 211, in __call__
    model = copy.deepcopy(aesthetic_clip()).to(device)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 927, in to
    return self._apply(convert)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
    param_applied = fn(param)
  File "C:\Bin\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 2.00 GiB total capacity; 1.72 GiB already allocated; 0 bytes free; 1.75 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
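From the traceback, the allocation fails at `aesthetic_clip.py` line 211, where the extension deep-copies the aesthetic CLIP model and moves the copy onto the GPU with `.to(device)`, so the failure does not depend on the image size. The error message itself suggests trying `max_split_size_mb`; a minimal, untested sketch of that suggestion (the 64 MiB value is only a guess for a 2 GiB card, not something from the extension):

```python
import os

# Must be set before PyTorch initializes CUDA, e.g. at the very top of the
# launch script, or on Windows via
# "set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64" in webui-user.bat.
# This only mitigates fragmentation; it cannot free VRAM that the copied
# CLIP model genuinely needs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
```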

I tried all image sizes (64x64, 512x512, ...) with the same result.

Can you please do something about this error? Thanks! :)