lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: It always finds the NVIDIA driver #152

Closed wzh531944865 closed 1 year ago

wzh531944865 commented 1 year ago

Is there an existing issue for this?

What happened?

venv "D:\AMD-SD\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
NVIDIA driver was found. Automatically changed backend to 'cuda'. You can manually select which backend will be used through '--backend' argument.
fatal: No names found, cannot describe anything.

Steps to reproduce the problem

I had a GTX 970 before, but today I replaced it with a 7900 XTX, so I need to use DirectML to run Stable Diffusion on Windows 10. However, when I run webui-user.bat, it always finds the NVIDIA driver and tries to use CUDA.

What should have happened?

The DirectML backend should have been selected, not CUDA.

Commit where the problem happens

D:\AMD-SD\stablediffusion-directml>git log
commit d4c168b2ad29d82e5fdfea4d598075f40a3b0341 (HEAD -> main, origin/main, origin/HEAD)
Merge: 890e307 cf1d67a
Author: Seunghoon Lee <lshqqytiger@naver.com>
Date:   Wed Mar 29 23:08:10 2023 +0900

    Merge branch 'Stability-AI-main'

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs (RX 6000 and above)

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

export COMMANDLINE_ARGS="--precision full --no-half"
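Aside: `export` is Unix shell syntax; on Windows, webui-user.bat sets this variable with the batch `set` command instead. The equivalent line (a sketch of what a Windows webui-user.bat would contain, with the same flags as reported) would be:

```bat
rem Set the web UI's command-line arguments (Windows batch syntax)
set COMMANDLINE_ARGS=--precision full --no-half
```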

List of extensions

None

Console logs

venv "D:\AMD-SD\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
NVIDIA driver was found. Automatically changed backend to 'cuda'. You can manually select which backend will be used through '--backend' argument.
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: ebf229bd1727a0f8f0d149829ce82e2012ba7318
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from D:\AMD-SD\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\AMD-SD\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.6s (import torch: 1.6s, import gradio: 0.9s, import ldm: 0.4s, other imports: 10.3s, load scripts: 0.8s, create ui: 0.3s, gradio launch: 0.1s).
DiffusionWrapper has 859.52 M params.
Applying optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
Model loaded in 1.7s (load weights from disk: 0.4s, create model: 0.3s, apply weights to model: 0.5s, apply half(): 0.5s).

Additional information

No response

lshqqytiger commented 1 year ago

Simple solution: add `--backend directml` to your command-line arguments. You should also remove the driver of the previous NVIDIA GPU to prevent it from being auto-detected.
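For reference, the suggested flag can be combined with the existing arguments in webui-user.bat (a sketch using Windows batch `set` syntax; the other flags are taken from the original report). Per the startup log, `--backend` manually selects the backend and overrides the automatic CUDA detection:

```bat
rem Force the DirectML backend instead of the auto-detected CUDA backend
set COMMANDLINE_ARGS=--backend directml --precision full --no-half
```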

wzh531944865 commented 1 year ago

Simple solution: add `--backend directml` to your command-line arguments. You should also remove the driver of the previous NVIDIA GPU to prevent it from being auto-detected.

Thanks, it works!