lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: It insists on using the CPU regardless of any args. #367

Closed: n-berenice closed this issue 4 months ago

n-berenice commented 4 months ago

What happened?

First of all, I'm running a stock, barebones, clean install: no extensions and only the included model. It still happens with other models (like chilloutmix), though. My args of choice are --skip-torch-cuda-test --no-half --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --precision full --always-batch-cond-uncond --disable-nan-check --medvram, and I'm running a 5700XT on Windows 10. I tried both the 22.Q4 drivers and the latest ones; no change.
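
Worth noting: --skip-torch-cuda-test only suppresses the startup CUDA check, so a CPU-only torch build will happily launch and run everything on the CPU; none of the flags above actually selects a GPU backend. A minimal diagnostic sketch, run from inside the webui venv (it assumes the torch-directml package, whose torch_directml module this fork uses for AMD cards on Windows):

```python
# Check which device the installed torch build can actually use.
import torch

print(torch.__version__)           # CPU-only builds usually carry a "+cpu" suffix
print(torch.cuda.is_available())   # False on an RX 5700 XT (no CUDA)

try:
    import torch_directml          # installed by the torch-directml package
    print(torch_directml.device_name(0))  # names the GPU if DirectML is usable
except ImportError:
    print("torch-directml is not installed; torch will run on the CPU")
```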

Steps to reproduce the problem

  1. Open the webui
  2. Write a prompt
  3. Click generate

What should have happened?

It should have used the graphics card instead of the CPU.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

Internal Server Error

Console logs

venv "A:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
Launching Web UI with arguments: --skip-torch-cuda-test --no-half --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --precision full --always-batch-cond-uncond --disable-nan-check --medvram
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Style database not found: A:\stable-diffusion-webui-directml\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Calculating sha256 for A:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 10.2s (prepare environment: 0.3s, import torch: 3.9s, import gradio: 1.2s, setup paths: 1.1s, initialize shared: 0.2s, other imports: 0.9s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 1.4s, create ui: 0.4s, gradio launch: 0.3s).
fc2511737a54c5e80b89ab03e0ab4b98d051ab187f92860f3cd664dc9d08b271
Loading weights [fc2511737a] from A:\stable-diffusion-webui-directml\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: A:\stable-diffusion-webui-directml\configs\v1-inference.yaml
Applying attention optimization: sub-quadratic... done.
Model loaded in 13.8s (calculate hash: 10.8s, load weights from disk: 0.2s, create model: 0.6s, apply weights to model: 2.0s, calculate empty prompt: 0.1s).
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "A:\stable-diffusion-webui-directml\modules\ui.py", line 2469, in download_sysinfo
    text = sysinfo.get()
  File "A:\stable-diffusion-webui-directml\modules\sysinfo.py", line 49, in get
    res = get_dict()
  File "A:\stable-diffusion-webui-directml\modules\sysinfo.py", line 75, in get_dict
    gpu = DeviceProperties(devices.device)
  File "A:\stable-diffusion-webui-directml\modules\dml\device_properties.py", line 12, in __init__
    self.name = torch.dml.get_device_name(device)
AttributeError: module 'torch' has no attribute 'dml'
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "A:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "A:\stable-diffusion-webui-directml\modules\ui.py", line 2482, in <lambda>
    lambda: download_sysinfo(attachment=True),
  File "A:\stable-diffusion-webui-directml\modules\ui.py", line 2469, in download_sysinfo
    text = sysinfo.get()
  File "A:\stable-diffusion-webui-directml\modules\sysinfo.py", line 49, in get
    res = get_dict()
  File "A:\stable-diffusion-webui-directml\modules\sysinfo.py", line 75, in get_dict
    gpu = DeviceProperties(devices.device)
  File "A:\stable-diffusion-webui-directml\modules\dml\device_properties.py", line 12, in __init__
    self.name = torch.dml.get_device_name(device)
AttributeError: module 'torch' has no attribute 'dml'
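
The AttributeError above also explains the failing Sysinfo download: modules/dml/device_properties.py calls torch.dml.get_device_name, but the torch.dml attribute only exists once the fork's DirectML initialization has run, which never happens when webui is launched without --use-directml. A hedged sketch of a more defensive constructor (an illustration only, not the repository's actual patch):

```python
# Sketch: guard against torch.dml being absent when webui was launched
# without --use-directml (illustrative, not the repo's actual fix).
import torch

class DeviceProperties:
    def __init__(self, device):
        if hasattr(torch, "dml"):
            # DirectML backend was attached at startup; query the real GPU name.
            self.name = torch.dml.get_device_name(device)
        else:
            # Backend was never initialized; report the CPU fallback instead
            # of crashing the sysinfo endpoint.
            self.name = "CPU (DirectML not initialized)"
```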

Additional information

Sysinfo is not available; downloading it triggers the Internal Server Error shown in the console logs above. Hardware: 8700K CPU, RX 5700XT GPU, 32GB memory.

lshqqytiger commented 4 months ago

Try --use-directml --no-half --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --precision full --always-batch-cond-uncond --disable-nan-check --medvram
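
For reference, these flags normally go on the COMMANDLINE_ARGS line of webui-user.bat. After relaunching, a quick way to confirm the GPU is actually in use is a sketch like the following (assuming the torch-directml package is installed in the venv):

```python
# Verify tensors land on the DirectML device after relaunching with --use-directml.
import torch
import torch_directml

dml = torch_directml.device()          # default DirectML device
x = torch.randn(4, 4, device=dml)
print(x.device)                        # expected: privateuseone:0 (DirectML's dispatch key)
print(torch_directml.device_name(0))   # expected: the RX 5700 XT's name
```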

n-berenice commented 4 months ago

> Try --use-directml --no-half --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --precision full --always-batch-cond-uncond --disable-nan-check --medvram

Thanks, that worked. I had an error with chilloutmix, but it apparently fixed itself after restarting the computer.