Interpause / auto-sd-paint-ext

Extension for AUTOMATIC1111 to add custom backend API for Krita Plugin & more
MIT License

CUDA Out of Memory when inpainting or img2img #120

Closed. laughinggaschambers closed this issue 1 year ago

laughinggaschambers commented 1 year ago

When inpainting or running img2img, it will generate up to around step 5 and then error. The status in the Krita docker flickers between "network error" and "can't reach backend". It keeps generating while printing errors, and if the image is small enough it sometimes completes; other times it gets ~90% through the generation, but the result only shows in the preview and never lands on a layer.

It works fine through the normal Gradio UI with no issues; the errors only appear when generating through Krita. This was done on the latest version of everything as of today. I reinstalled the extension several times, pulled the latest build with git, and updated the CUDA toolkit to the latest version. I tried with and without xformers, and reinstalled xformers/torch. The Krita UI settings are mostly at defaults other than color correction. I had a slightly different error log while the dynamic prompts extension was installed, so I removed it, but I still get the same CUDA memory error.


auto-sd-paint-ext:INFO: img2img:
{'restore_faces': False, 'face_restorer': 'None', 'codeformer_weight': 0.5, 'inpainting_fill': 1, 'inpaint_full_res': False, 'inpaint_full_res_padding': 0, 'mask_blur': 0, 'invert_mask': False, 'inpaint_mask_weight': 1.0, 'sd_model': 'RandoMix3_0.8-NAIfinal-pruned_0.8-SDv1-5-pruned-emaonly_0.2-Weighted_sum-merged_0.2-Weighted_sum-merged.ckpt [fbeae043ff]', 'script': 'None', 'script_args': [], 'prompt': 'hand holding a torch', 'negative_prompt': 'broom, photograph, frame, 3d, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name', 'seed': -1, 'seed_enable_extras': False, 'subseed': -1, 'subseed_strength': 0.0, 'seed_resize_from_h': 0, 'seed_resize_from_w': 0, 'sampler_name': 'DDIM', 'steps': 40, 'cfg_scale': 5.0, 'denoising_strength': 0.5600000000000002, 'batch_count': 1, 'batch_size': 1, 'base_size': 512, 'max_size': 768, 'disable_sddebz_highres': False, 'tiling': False, 'highres_fix': False, 'firstphase_height': 512, 'firstphase_width': 512, 'upscaler_name': 'None', 'filter_nsfw': False, 'include_grid': False, 'sample_path': 'outputs/krita-out', 'save_samples': False, 'is_inpaint': True, 'resize_mode': 1, 'color_correct': True, 'do_exact_steps': True}
auto-sd-paint-ext:INFO: img size: 508x635 -> 512x640, aspect ratio: 0.80 -> 0.80, 0.00% change
Running DDIM Sampling with 39 timesteps
Decoding image:  13%|████████▌                                                          | 5/39 [00:04<00:29,  1.16it/s]
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 395, in app_encryption_middleware
    res: StreamingResponse = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function    return await run_in_threadpool(dependant.call, **values)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 319, in progressapi
    shared.state.set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
    self.do_set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 423, in decode_first_stage
    x = model.decode_first_stage(x)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 641, in forward
    h = self.up[i_level].upsample(h)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 64, in forward
    x = self.conv(x)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 8.00 GiB total capacity; 7.01 GiB already allocated; 0 bytes free; 7.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 395, in app_encryption_middleware
    res: StreamingResponse = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function    return await run_in_threadpool(dependant.call, **values)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 319, in progressapi
    shared.state.set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
    self.do_set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 423, in decode_first_stage
    x = model.decode_first_stage(x)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 131, in forward
    h = self.norm1(h)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 8.00 GiB total capacity; 6.96 GiB already allocated; 0 bytes free; 7.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Decoding image:  13%|████████▌                                                          | 5/39 [00:06<00:41,  1.22s/it]
Error completing request
Arguments: ('', 4, 'hand holding a torch', 'broom, photograph, frame, 3d, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name', 'None', <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=508x635 at 0x20D74636B00>, None, None, None, None, <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=508x635 at 0x20D74636B00>, <PIL.Image.Image image mode=L size=508x635 at 0x20D743504F0>, 40, 14, 0, None, 1, False, False, 1, 1, 5.0, 0, 0.5600000000000002, -1, -1, 0.0, 0, 0, False, 640, 512, 1, False, 0, False, '', '', '', [], 0) {}
Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
    processed = process_images(p)
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 628, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 1044, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 139, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 43, in launch_sampling
    return func()
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 139, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning))
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 332, in decode
    x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 87, in p_sample_ddim_hook
    res = self.orig_p_sample_ddim(x_dec, cond, ts, unconditional_conditioning=unconditional_conditioning, *args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 211, in p_sample_ddim
    model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
    x = block(x, context=context[i])
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 259, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 129, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 264, in _forward
    x = self.ff(self.norm3(x)) + x
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 73, in forward
    return self.net(x)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 52, in forward
    x, gate = self.proj(x).chunk(2, dim=-1)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 178, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 8.00 GiB total capacity; 6.95 GiB already allocated; 0 bytes free; 7.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

auto-sd-paint-ext:WARNING: Interrupted!

**Desktop:**
 - OS: Windows 10
 - WebUI commit revision 34a4c152f2e82053eb67a3ea4ed1dd6e5e2919b1
 - Extension commit revision 516f70cbc61aa5085ed9fe5e7818fa2976e01fd8
Interpause commented 1 year ago

Under Common Options there are the base size & max size settings. What the plugin does differently from the webUI is that it changes the requested resolution for the generated image based on them (roughly as sketched below). There is a toggle to disable this, which allows using the exact resolution of the canvas/selection for image generation.
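
For reference, that heuristic behaves something like the following minimal sketch (illustrative Python only, not the extension's actual code; the function name and the exact rounding rule are my assumptions, though it reproduces the sizes in the logs above):

```python
import math

# Sketch of the base/max size heuristic (assumed behaviour, not the real code).
def krita_request_size(w, h, base_size=512, max_size=768, disable_scaling=False):
    if not disable_scaling:
        scale = base_size / min(w, h)   # bring the short side up to base_size
        if scale > 1:
            w, h = w * scale, h * scale
        if max(w, h) > max_size:        # clamp the long side to max_size
            scale = max_size / max(w, h)
            w, h = w * scale, h * scale
    # Stable Diffusion wants dimensions divisible by 8 (latents are 1/8 scale),
    # so round up to the next multiple of 8 either way.
    return math.ceil(w / 8) * 8, math.ceil(h / 8) * 8

print(krita_request_size(508, 635))                        # (512, 640), as in the log above
print(krita_request_size(162, 108, disable_scaling=True))  # (168, 112), as later in this thread
```

Either way the requested image is slightly larger than the canvas/selection and the result is cropped back to the target afterwards (compare "img Size: 168x112, target: 162x108" in the later log).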

laughinggaschambers commented 1 year ago

I tried generating with it toggled on; I still get the same error log, but this time it created a layer instead of being interrupted. So no issues for now. I don't know whether the errors should be concerning or not.

Nvm, it seems that as the selection grows from ~200x200 to ~500x500 it becomes more and more error prone, until generation is interrupted and no layer is created.

laughinggaschambers commented 1 year ago

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 395, in app_encryption_middleware
    res: StreamingResponse = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 319, in progressapi
    shared.state.set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
    self.do_set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 423, in decode_first_stage
    x = model.decode_first_stage(x)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 131, in forward
    h = self.norm1(h)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 8.00 GiB total capacity; 6.56 GiB already allocated; 0 bytes free; 7.07 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Decoding image: 100%|██████████████████████████████████████████████████████████████████| 39/39 [00:04<00:00,  8.94it/s]
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 395, in app_encryption_middleware
    res: StreamingResponse = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 319, in progressapi
    shared.state.set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
    self.do_set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "D:\Desktop\stable-diffusion-webui\modules\processing.py", line 423, in decode_first_stage
    x = model.decode_first_stage(x)
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 637, in forward
    h = self.up[i_level].block[i_block](h, temb)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 131, in forward
    h = self.norm1(h)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 8.00 GiB total capacity; 6.64 GiB already allocated; 0 bytes free; 7.07 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Total progress:  98%|████████████████████████████████████████████████████████████████▎ | 39/40 [00:05<00:00,  7.69it/s]
auto-sd-paint-ext:INFO: img Size: 360x408, target: 360x405
auto-sd-paint-ext:INFO: output sizes: [312228]
auto-sd-paint-ext:INFO: finished img2img!
auto-sd-paint-ext:INFO: img2img:
{'restore_faces': False, 'face_restorer': 'None', 'codeformer_weight': 0.5, 'inpainting_fill': 1, 'inpaint_full_res': False, 'inpaint_full_res_padding': 0, 'mask_blur': 0, 'invert_mask': False, 'inpaint_mask_weight': 1.0, 'sd_model': 'RandoMix3_0.8-NAIfinal-pruned_0.8-SDv1-5-pruned-emaonly_0.2-Weighted_sum-merged_0.2-Weighted_sum-merged.ckpt [fbeae043ff]', 'script': 'None', 'script_args': [], 'prompt': 'hand holding a torch', 'negative_prompt': 'broom, photograph, frame, 3d, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name', 'seed': -1, 'seed_enable_extras': False, 'subseed': -1, 'subseed_strength': 0.0, 'seed_resize_from_h': 0, 'seed_resize_from_w': 0, 'sampler_name': 'DDIM', 'steps': 40, 'cfg_scale': 5.0, 'denoising_strength': 0.5600000000000002, 'batch_count': 1, 'batch_size': 1, 'base_size': 512, 'max_size': 768, 'disable_sddebz_highres': True, 'tiling': False, 'highres_fix': False, 'firstphase_height': 512, 'firstphase_width': 512, 'upscaler_name': 'None', 'filter_nsfw': False, 'include_grid': False, 'sample_path': 'outputs/krita-out', 'save_samples': False, 'is_inpaint': True, 'resize_mode': 1, 'color_correct': True, 'do_exact_steps': True}
auto-sd-paint-ext:INFO: img size: 162x108 -> 168x112, aspect ratio: 1.50 -> 1.50, 0.00% change
Running DDIM Sampling with 39 timesteps
Decoding image: 100%|██████████████████████████████████████████████████████████████████| 39/39 [00:03<00:00, 12.60it/s]
Total progress:  98%|████████████████████████████████████████████████████████████████▎ | 39/40 [00:02<00:00, 13.98it/s]
auto-sd-paint-ext:INFO: img Size: 168x112, target: 162x108
auto-sd-paint-ext:INFO: output sizes: [50484]
auto-sd-paint-ext:INFO: finished img2img!

JasonS09 commented 1 year ago

I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using restore faces, and I have disabled base/max size. The file is small too (288 x 512). I'm working with a remote server on Google Colab.

When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

Interpause commented 1 year ago

> I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using restore faces, and I have disabled base/max size. The file is small too (288 x 512). I'm working with a remote server on Google Colab.
>
> When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

Allocating more than a GB of memory should be expected for any image generation. Did you enable A1111's low-VRAM usage features like --lowvram? (A sketch of the usual mitigations is below.)
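
For the fragmentation-style OOM in the logs above (the allocator reports 0 bytes free while ~7 GiB is reserved), the error message itself also suggests max_split_size_mb. A minimal sketch of both mitigations follows; the values are illustrative, and on A1111 these are normally set in webui-user.bat rather than from Python:

```python
import os

# 1) Reduce allocator fragmentation (the hint printed in the OOM message).
#    Must be set before CUDA is first initialized; the webui-user.bat
#    equivalent is:  set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

# 2) A1111's low-VRAM modes are launch flags, not code; in webui-user.bat:
#    set COMMANDLINE_ARGS=--medvram     (or --lowvram: even less VRAM, slower)

import torch  # imported after the env var so the allocator picks it up

print(f"{torch.cuda.get_device_properties(0).total_memory / 2**30:.2f} GiB total")
```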

Interpause commented 1 year ago

> I tried generating with it toggled on; I still get the same error log, but this time it created a layer instead of being interrupted. So no issues for now. I don't know whether the errors should be concerning or not.
>
> Nvm, it seems that as the selection grows from ~200x200 to ~500x500 it becomes more and more error prone, until generation is interrupted and no layer is created.

To confirm, this works as expected when using the webUI?

JasonS09 commented 1 year ago

> I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using restore faces, and I have disabled base/max size. The file is small too (288 x 512). I'm working with a remote server on Google Colab. When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

> Allocating more than a GB of memory should be expected for any image generation. Did you enable A1111's low-VRAM usage features like --lowvram?

I managed to make it work by disabling the live preview in the web UI; that was the issue for me.
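
For anyone else hitting this: every traceback above fails inside progressapi -> set_current_image, i.e. while decoding the live-preview image with the full VAE, which would explain why disabling previews removes the VRAM spike. The same change can be made over the API, roughly like this (a sketch; these option names are from A1111 builds of this era and may differ in yours):

```python
import requests

BASE_URL = "http://127.0.0.1:7860"  # or the remote server's address

# Turn live previews off entirely...
requests.post(f"{BASE_URL}/sdapi/v1/options", json={"live_previews_enable": False})

# ...or keep previews but skip the full VAE decode on every progress poll:
# requests.post(f"{BASE_URL}/sdapi/v1/options", json={"show_progress_type": "Approx cheap"})
```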

JohnDoeAntler commented 1 year ago

> I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using restore faces, and I have disabled base/max size. The file is small too (288 x 512). I'm working with a remote server on Google Colab. When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

> Allocating more than a GB of memory should be expected for any image generation. Did you enable A1111's low-VRAM usage features like --lowvram?

> I managed to make it work by disabling the live preview in the web UI; that was the issue for me.

Thanks, that fixes the issue for me.