Closed — laughinggaschambers closed this issue 1 year ago
Under Common Options there are Base Size and Max Size. Unlike the webUI, the plugin changes the requested resolution for the generated image based on them. There is a toggle to disable this, which makes generation use the exact resolution of the canvas/selection.
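As a rough illustration only (this is not the plugin's actual code; the function name and defaults are hypothetical), the Base Size / Max Size behaviour amounts to something like:

```python
def adjust_resolution(width, height, base=512, max_size=768):
    """Hypothetical sketch of the Base Size / Max Size behaviour described
    above: upscale the selection so its shorter side reaches `base`, cap the
    longer side at `max_size`, keep the aspect ratio, and round each side to
    the 64-pixel multiples Stable Diffusion expects."""
    scale = max(base / min(width, height), 1.0)
    if max(width, height) * scale > max_size:
        scale = max_size / max(width, height)
    # round each side to a multiple of 64, never below 64
    return tuple(max(64, round(v * scale / 64) * 64) for v in (width, height))
```

For example, a 288x512 selection would be requested at 448x768 under these defaults; with the toggle disabled, the canvas/selection size is used as-is.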
I tried to generate with it toggled on; it still prints the same error log, but this time it created a layer instead of being interrupted, so no issues for now. I don't know whether the error should be concerning or not.
Never mind: as the selection grows from ~200x200 to ~500x500 it becomes more and more error-prone, and eventually generation is interrupted and no layer is created.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 395, in app_encryption_middleware
    res: StreamingResponse = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\Desktop\stable-diffusion-webui\modules\api\api.py", line 319, in progressapi
    shared.state.set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
    self.do_set_current_image()
  File "D:\Desktop\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "D:\Desktop\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using Restore Faces, and I have disabled base/max size. The file is also small (288x512). I'm working with a remote server on Google Colab.
When the CUDA error doesn't happen, the process takes 40+ minutes to finish.
Allocating more than a GB of memory is to be expected for any image generation. Did you enable A1111's low-VRAM features such as `--lowvram`?
> I tried to generate with it toggled on; it still prints the same error log, but this time it created a layer instead of being interrupted, so no issues for now. I don't know whether the error should be concerning or not.
> Never mind: as the selection grows from ~200x200 to ~500x500 it becomes more and more error-prone, and eventually generation is interrupted and no layer is created.
To confirm, this works as expected when using the webUI?
> I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using Restore Faces, and I have disabled base/max size. The file is also small (288x512). I'm working with a remote server on Google Colab. When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

> Allocating more than a GB of memory is to be expected for any image generation. Did you enable A1111's low-VRAM features such as `--lowvram`?
I managed to make it work by disabling the live preview in the web UI; that was the issue for me.
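For reference, the same toggle can also be flipped outside the UI by editing the webUI's config.json. A minimal sketch, assuming the option key is `live_previews_enable` (that is the key recent A1111 builds use, but check Settings → Live previews in your build):

```python
import json

def disable_live_previews(cfg_text: str) -> str:
    """Return the webUI config JSON with live previews turned off.
    Assumes the "live_previews_enable" option key used by recent
    A1111 builds; older builds may name the setting differently."""
    cfg = json.loads(cfg_text)
    cfg["live_previews_enable"] = False
    return json.dumps(cfg, indent=4)
```

Restart the webUI after changing the file so the setting is picked up.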
> I'm having the same issue. When inpainting, it tries to allocate a huge amount of memory (1 GB+). I'm not using Restore Faces, and I have disabled base/max size. The file is also small (288x512). I'm working with a remote server on Google Colab. When the CUDA error doesn't happen, the process takes 40+ minutes to finish.

> Allocating more than a GB of memory is to be expected for any image generation. Did you enable A1111's low-VRAM features such as `--lowvram`?

> I managed to make it work by disabling the live preview in the web UI; that was the issue for me.
Thanks, that fixes the issue for me.
When inpainting or using img2img, it generates up to 5/-- and then errors. The status in the docker flickers between "network error" and "can't reach backend". It keeps generating and printing errors; if the selection is small enough it sometimes completes, and sometimes 90% of the generation finishes but the result only shows in the preview and never lands on a layer.
It works fine through the normal Gradio UI with no issues; the errors only appear when generating through Krita. This was done today on the latest version of everything. I reinstalled the extension several times, pulled the latest build with git, updated the CUDA toolkit to the latest, and tried with and without xformers (reinstalling xformers/torch). The Krita UI settings are pretty much default other than color correction. I had a slightly different error log when the Dynamic Prompts extension was installed, so I removed it, but I still get the same CUDA memory error.