pkuliyi2015 / sd-webui-stablesr

StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
https://iceclear.github.io/projects/stablesr/

CUDA Out of Memory #4

Open · Hansynily opened 1 year ago

Hansynily commented 1 year ago

Even with the lowest tile size applied, OOM happens when I try to upscale a 512x512 image, with Tiled Diffusion and Tiled VAE enabled. Is this a dead end for 4GB VRAM users, or is it just me? It does work with the StableSR script disabled.

GPU: GTX 1650 (4GB), launched with the `--lowvram` argument.
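For reference, the PyTorch OOM messages quoted below end with the standard hint about `max_split_size_mb`. A minimal sketch of how that could be tried, assuming the value 128 as an illustrative guess (it is not a recommendation from this thread): the variable must be set before the first CUDA allocation, e.g. with `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in `webui-user.bat`, or at the very top of a Python entry point:

```python
import os

# Hedged sketch: cap the caching allocator's split size so fragmented free
# VRAM can still serve small requests. This must run before the first CUDA
# allocation, which is why it sits above the torch import.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB total")
```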

Hansynily commented 1 year ago

```
Error completing request | 0/20 [00:00<?, ?it/s]
Arguments: ('task(ffdbc1l7dit921m)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x27DABAD0940>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 2, 1.5, 0.3, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 9, True, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 64, 32, 1, 'None', 1.1, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 512, 48, True, True, False, False, '

\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'stablesr_webui_sd-v2-1-512-ema-000117.ckpt', 1.1, True, 'Wavelet', False) {}
Traceback (most recent call last):
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
    samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 243, in wrapper
    return fn(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\mixtureofdiffusers.py", line 131, in apply_model_hijack
    x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 92, in unet_forward
    self.spade_layers.to(x.device)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 4.00 GiB total capacity; 3.36 GiB already allocated; 0 bytes free; 3.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 180, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 408, in run
    processed = script.run(p, *script_args)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
    result: Processed = processing.process_images(p)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 680, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 238, in sample_custom
    self.stablesr_model.struct_cond_model.to(device=first_param.device)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already allocated; 0 bytes free; 3.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

```
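Both tracebacks fail at the same place: StableSR moves its extra modules (`spade_layers`, then `struct_cond_model`) onto the GPU with `Module.to()`, which allocates a fresh copy of every parameter, hence the `to` -> `_apply` -> `convert` frames above. A minimal illustration of that failure mode, with a hypothetical `move_or_fallback` wrapper of my own (not the extension's actual strategy):

```python
import torch
import torch.nn as nn

# Illustration only: Module.to() walks every parameter via Module._apply()
# and allocates a copy on the target device, so even a small module raises
# torch.cuda.OutOfMemoryError when the GPU is already nearly full.
spade_like = nn.Sequential(*[nn.Conv2d(320, 320, 3, padding=1) for _ in range(4)])

def move_or_fallback(module: nn.Module, device: str = "cuda") -> nn.Module:
    try:
        return module.to(device)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()        # release cached blocks and retry once
        try:
            return module.to(device)
        except torch.cuda.OutOfMemoryError:
            return module.to("cpu")     # last resort: keep the module on CPU

if torch.cuda.is_available():
    spade_like = move_or_fallback(spade_like)
```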

pkuliyi2015 commented 1 year ago

I understand your situation and will try to make it support 4GB cards. One of my friends has already succeeded with it, so I will ask him to share his settings.

momognu commented 1 year ago

Encoder tile size: 512, decoder tile size: 48. Give it one more try.
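The reasoning behind this suggestion: Tiled VAE's peak activation memory scales roughly with the tile area, so smaller tiles trade speed for VRAM. A rough back-of-envelope sketch, where the channel count and fp16 assumption are illustrative guesses and not Tiled VAE's real formula:

```python
# Rough sketch: memory for one feature map over a square tile. The real VAE
# keeps several maps alive at once, so actual peaks are a few times larger.
def approx_tile_mib(tile_px: int, channels: int = 512, dtype_bytes: int = 2) -> float:
    return tile_px * tile_px * channels * dtype_bytes / 2**20

for tile in (1024, 512, 256):
    print(f"{tile:>4} px tile ~ {approx_tile_mib(tile):7.1f} MiB per feature map")
```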

xueqing0622 commented 1 year ago

My 1660 Ti can only use v2-1_512-ema-pruned, and it still hits OutOfMemoryError:

```
Error completing request
Arguments: ('task(fmaux8aa7tozeq2)', 0, '(best quality), ((masterpiece)), 1girl, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration,art by Guweiz , (photorealistic:0.8), (outline:0.8)', 'lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck', [], <PIL.Image.Image image mode=RGBA size=448x640 at 0x18D7F2E2770>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 15, 1.5, 0.5, -1.0, -1.0, 0, 0, 0, False, 0, 640, 448, 1, 1, 0, 32, 0, '', '', '', [], 13, False, {'ad_model': 'face_yolov8n_v2.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': True, 'ad_inpaint_width': 256, 'ad_inpaint_height': 256, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 512, 64, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, 7, 100, 'Constant', 0, 'Half Cosine Up', 3.5, 4, <controlnet.py.UiControlNetUnit object at 0x0000018D7F356D70>, 'ALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nNONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nOUTALL:0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nNINALL:0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1\nlbbody:1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nlbcloth:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nlbcolor:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nlbaction:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\nbg: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nxqcloth:1, 0,0,1,1,1,1, 1, 1,1,1,0,0,0,0,1,1\nxqface:0 ,0,1,1,1,0,0, 0, 0,1,1,1,1,0,0,0,0\nxqface1:0 ,0,1,1,1,0,0, 0, 0.5,1,1,1,1,0.5,0.5,0.5,0.5\nxqface2:0 ,0,1,1,1,0,0, 0, 1,1,1,1,1,1,1,0,0\nxqface3:0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0\nxqface4:0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1\nxqface5: 
0.15,0.15,0.15,0.15,0.15,0.15,0.15,0.15,1,1,1,1,1,1,1,1,1\nxqface6:0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0\nxqpose:1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nNOUTS:1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nxqclo0:0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nxqclo1:1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0\nxqstyle:0,0,0,0,0,1,1,0,1,0,0,1,1,1,1,1,1\nxqstyle1:0,1,1,1,0,0,0,1,1,1,1,1,1,1,1,1,1\nxqstyle2:0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,1,1\nNINS:0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1\nnfaces:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nnface:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\nwear:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nfaces:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\npose:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\npaint:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nchar:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\nlowover:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nLyCOFACE:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0.8,0.5,1,1,1,1\nstyleh:1,0.5,0.5,0.5,0.5,0.5,0.5,0.5,1,1,1,1,1,1,1,1,1\nstyle:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nN-5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,1,1,1,1,1,1,1,1,1,1\nOUT-5:1,1,1,1,1,1,1,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nIN-7:0.3,0.3,0.3,0.3,0.3,0.3,0.3,1,1,1,1,1,1,1,1,1,1\nNINXQ: 0.15,0.15,0.15,0.15,0.15,0.15,1,1,1,1,1,1,1,1,1,1,1\nOUT-7:1,1,1,1,1,1,1,0.3,0.3,0.3,0.3,0.3,0.3,0.3,0.3,0.3,0.3\nIN-3:0.7,0.7,0.7,0.7,0.7,0.7,0.7,1,1,1,1,1,1,1,1,1,1\nOUT-3:1,1,1,1,1,1,1,0.7,0.7,0.7,0.7,0.7,0.7,0.7,0.7,0.7,0.7\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nNIND:1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,1,1\nNMIDD:1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1\nNOUTD:1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1\nNOUTALL:1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0\nBASENINALL:1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, '

\n', True, True, '', '', True, 50, True, 1, 0, False, '', '', '', '', -1, False, False, False, False, '', '', '', '', 222, False, False, False, False, False, 0, 0, '把剧本填写在这里~~(1 girl),(1 boy),(2 people),', '', '', '', 333, False, False, True, False, 1, 0, 0, 0, 0, 'Koyori - 只有我能进入的隐藏迷宫(插画师)', 'Digital Age - 数字时代', False, False, '', '', '', '', 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50, 'stablesr_webui_sd-v2-1-512-ema-000117.ckpt', 2, True, 'Wavelet', False, 0, 0, 512, 512, False, True, False, False, 0, 1, False, 1, True, True, False, False, ['left-right', 'red-cyan-anaglyph'], 2.5, 'polylines_sharp', 0, False, False, False, False, False, False, 'u2net', False, True, False, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "D:\SD2\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
    samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
  File "D:\SD2\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\SD2\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "D:\SD2\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "D:\SD2\py310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD2\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD2\modules\sd_samplers_kdiffusion.py", line 156, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD2\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\SD2\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\SD2\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\SD2\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\SD2\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD2\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD2\extensions\sd-webui-stablesr\scripts\stablesr.py", line 93, in unet_forward
    self.struct_cond_model.to(x.device)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\SD2\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\SD2\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\SD2\modules\img2img.py", line 176, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "D:\SD2\modules\scripts.py", line 441, in run
    processed = script.run(p, *script_args)
  File "D:\SD2\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
    result: Processed = processing.process_images(p)
  File "D:\SD2\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "D:\SD2\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\SD2\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\SD2\extensions\sd-webui-stablesr\scripts\stablesr.py", line 238, in sample_custom
    self.stablesr_model.struct_cond_model.to(device=first_param.device)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\SD2\py310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
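Note that in this report, reserved memory (5.31 GiB) barely exceeds allocated memory (5.27 GiB), so the `max_split_size_mb` fragmentation hint is unlikely to help much here; the card is simply full. A small diagnostic sketch for checking the allocator's view when fragmentation is suspected:

```python
import torch

# Diagnostic sketch: when the OOM message shows reserved >> allocated,
# fragmentation is the likely culprit; memory_summary() breaks down the
# caching allocator's pools so wasted, unreleasable blocks are visible.
if torch.cuda.is_available():
    print(f"{torch.cuda.memory_allocated(0) / 2**30:.2f} GiB allocated")
    print(f"{torch.cuda.memory_reserved(0) / 2**30:.2f} GiB reserved")
    print(torch.cuda.memory_summary(device=0, abbreviated=True))
```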