Open Aypixl opened 1 year ago
I confirm the same error on my AMD 6700xt. I get this error with img2img.
I got this error too, when using the openoutpaint extension. I am an AMD user with only 2 GB of VRAM. I also saw this in the console:
D:\novel AI\stable-diffusion-webui-directml\modules\processing.py:331: UserWarning: The operator 'aten::lerp.Scalar_out' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at D:\a_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  conditioning_image = torch.lerp(
*** API error: POST: http://127.0.0.1:7860/sdapi/v1/img2img {'error': 'RuntimeError', 'detail': '', 'body': '', 'errors': '"lerp_kernel_scalar" not implemented for \'Half\''}
I think it's because my video card can use half precision, but my CPU can't. Before I updated the code, I didn't get this error. sysinfo-2023-10-05-04-18.txt
Today I updated to the latest code; the bug is still there.
*** API error: POST: http://127.0.0.1:7860/sdapi/v1/img2img {'error': 'RuntimeError', 'detail': '', 'body': '', 'errors': '"lerp_kernel_scalar" not implemented for \'Half\''}
Traceback (most recent call last):
  File "F:\venv\lib\site-packages\anyio\streams\memory.py", line 98, in receive
    return self.receive_nowait()
  File "F:\venv\lib\site-packages\anyio\streams\memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "F:\venv\lib\site-packages\anyio\streams\memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\stable-diffusion-webui-directml\modules\api\api.py", line 186, in exception_handling
    return await call_next(request)
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "F:\stable-diffusion-webui-directml\modules\api\api.py", line 150, in log_and_time
    res: Response = await call_next(req)
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "F:\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "F:\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "F:\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "F:\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "F:\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "F:\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "F:\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "F:\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "F:\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "F:\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "F:\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "F:\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "F:\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "F:\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "F:\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "F:\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "F:\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "F:\stable-diffusion-webui-directml\modules\api\api.py", line 545, in img2imgapi
    processed = process_images(p)
  File "F:\stable-diffusion-webui-directml\modules\processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "F:\stable-diffusion-webui-directml\modules\processing.py", line 1009, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "F:\stable-diffusion-webui-directml\modules\processing.py", line 1826, in init
    self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask, self.mask_round)
  File "F:\stable-diffusion-webui-directml\modules\processing.py", line 389, in img2img_image_conditioning
    return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask, round_image_mask=round_image_mask)
  File "F:\stable-diffusion-webui-directml\modules\processing.py", line 360, in inpainting_image_conditioning
    conditioning_image = torch.lerp(
RuntimeError: "lerp_kernel_scalar" not implemented for 'Half'
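The traceback shows the failure happens when `torch.lerp` runs on float16 tensors after the DirectML CPU fallback, and CPU PyTorch has no half-precision scalar lerp kernel. One possible workaround (a sketch, not a confirmed fix; `lerp_half_safe` is a hypothetical helper, not part of the webui code) is to upcast to float32 for the interpolation and cast back:

```python
import torch

def lerp_half_safe(start, end, weight):
    """Interpolate like torch.lerp, but dodge the missing half-precision
    CPU kernel by computing in float32 and casting back to float16."""
    if start.dtype == torch.float16:
        out = torch.lerp(start.float(), end.float(), weight)
        return out.to(torch.float16)
    return torch.lerp(start, end, weight)

# float16 tensors on CPU, where the scalar lerp kernel is missing
a = torch.zeros(4, dtype=torch.float16)
b = torch.ones(4, dtype=torch.float16)
result = lerp_half_safe(a, b, 0.5)  # midpoint of a and b, still float16
```

Alternatively, the webui's `--no-half` launch flag avoids half precision entirely, though that costs extra VRAM, which may not be an option on low-VRAM cards.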
Is there an existing issue for this?
What happened?
Upon attempting to render using inpaint, I get a runtime error: "lerp_kernel_scalar" not implemented for 'Half'
Steps to reproduce the problem
First inpaint attempt after a fresh install, using URPM safetensors.
What should have happened?
The image should be rendered.
Version or Commit where the problem happens
n/a
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
AMD GPUs (RX 6000 above)
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
no
Console logs
Additional information
No response