lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Failed to Install on AMD GPU with Windows 10 #440

Closed KwaiBird closed 5 months ago

KwaiBird commented 5 months ago


What happened?

I failed to install webui-directml v1.9.0 from scratch.

Steps to reproduce the problem

I had my copy of stable-diffusion-webui-directml somewhat working on the latest v1.9.0, updated via git pull from v1.8.0 RC (I guess), but I'm not sure how I originally installed it. So I tried to install the latest v1.9.0 from scratch.

I mostly followed the Windows section of AUTOMATIC1111's Install and Run on AMD GPUs guide, except that I ran git clone at C:\sddw_new. During the first run of webui-user.bat, the guide says "4. If it looks like it is stuck when installing or running, press enter in the terminal and it should continue.", so I pressed Enter when the command prompt showed the message below.

Traceback (most recent call last):
  File "C:\sddm_new\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\sddm_new\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\sddm_new\stable-diffusion-webui-directml\modules\launch_utils.py", line 593, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Then the installation stopped and the command prompt window disappeared. I ran webui-user.bat again, but the same error message appeared. So I added --skip-torch-cuda-test to the bat file as shown below, closed cmd.exe, and ran the bat again.

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test

call webui.bat

The installation continued, and the WebUI appeared. Then, as expected, RuntimeError: Found no NVIDIA driver on your system. was shown in the cmd window. I closed the cmd, changed the line to set COMMANDLINE_ARGS=--use-directml, ran the bat again, and the error message below appeared.

Installing onnxruntime-directml
Traceback (most recent call last):
  File "C:\sddw_new\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\sddw_new\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\launch_utils.py", line 643, in prepare_environment
    run_pip("install onnxruntime-directml", "onnxruntime-directml")
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\launch_utils.py", line 149, in run_pip
    return run(
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\launch_utils.py", line 121, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install onnxruntime-directml.
Command: "C:\sddw_new\stable-diffusion-webui-directml\venv\Scripts\python.exe" -m pip install onnxruntime-directml --prefer-binary
Error code: 1
stdout: Collecting onnxruntime-directml
  Using cached onnxruntime_directml-1.17.3-cp310-cp310-win_amd64.whl (15.4 MB)
Requirement already satisfied: coloredlogs in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (15.0.1)
Requirement already satisfied: numpy>=1.21.6 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (1.26.2)
Requirement already satisfied: flatbuffers in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (24.3.25)
Requirement already satisfied: sympy in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (1.12)
Requirement already satisfied: packaging in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (24.0)
Requirement already satisfied: protobuf in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (3.20.3)
Requirement already satisfied: humanfriendly>=9.1 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from coloredlogs->onnxruntime-directml) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from sympy->onnxruntime-directml) (1.3.0)
Requirement already satisfied: pyreadline3 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-directml) (3.4.1)
Installing collected packages: onnxruntime-directml

stderr: ERROR: Could not install packages due to an OSError: [WinError 5] アクセスが拒否されました。: 'C:\sddw_new\stable-diffusion-webui-directml\venv\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_shared.dll' Check the permissions.

(I'm running Windows in Japanese; the message was garbled in the console, but it is アクセスが拒否されました。 = "Access denied.")

I tried running webui-user.bat as administrator, and also typed the command below in an elevated cmd:

pip install onnxruntime-directml

, but the same error persisted.
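For what it's worth, WinError 5 during a pip install of a DLL usually means the file is held open by a running process (for example, another WebUI instance still loaded in memory) rather than a true permissions problem, which is why running as administrator didn't help. A minimal sketch of that diagnosis (the `writable` helper is my own, not part of the WebUI) probes whether the DLL can be opened for writing:

```python
def writable(path: str) -> bool:
    """Return True if `path` can be opened for writing.

    On Windows, PermissionError here corresponds to WinError 5
    ("access denied"), which typically means the file is locked
    by a running process or blocked by ACLs/antivirus.
    """
    try:
        with open(path, "r+b"):
            return True
    except PermissionError:
        return False
    except FileNotFoundError:
        # Nothing to overwrite; pip would be free to create the file.
        return True
```

If this returns False for onnxruntime_providers_shared.dll while no WebUI or Python process is running, that points at ACLs or antivirus interference rather than a file lock.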

What should have happened?

WebUI should complete installation.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-04-15-03-16.json

Console logs

venv "C:\sddw_new\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.0
Commit hash: 174310096385aa33d7b174f231650a941d73f19b
Installing clip
Installing open_clip
Cloning assets into C:\sddw_new\stable-diffusion-webui-directml\repositories\stable-diffusion-webui-assets...
Cloning into 'C:\sddw_new\stable-diffusion-webui-directml\repositories\stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 26.54 MiB/s, done.
Cloning Stable Diffusion into C:\sddw_new\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai...
Cloning into 'C:\sddw_new\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (306/306), done.
remote: Total 580 (delta 278), reused 446 (delta 247), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 25.68 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into C:\sddw_new\stable-diffusion-webui-directml\repositories\generative-models...
Cloning into 'C:\sddw_new\stable-diffusion-webui-directml\repositories\generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 23.02 MiB/s, done.
Resolving deltas: 100% (490/490), done.
Cloning K-diffusion into C:\sddw_new\stable-diffusion-webui-directml\repositories\k-diffusion...
Cloning into 'C:\sddw_new\stable-diffusion-webui-directml\repositories\k-diffusion'...
remote: Enumerating objects: 1340, done.
remote: Counting objects: 100% (738/738), done.
remote: Compressing objects: 100% (91/91), done.
remote: Total 1340 (delta 693), reused 654 (delta 647), pack-reused 602
Receiving objects: 100% (1340/1340), 236.03 KiB | 8.14 MiB/s, done.
Resolving deltas: 100% (941/941), done.
Cloning BLIP into C:\sddw_new\stable-diffusion-webui-directml\repositories\BLIP...
Cloning into 'C:\sddw_new\stable-diffusion-webui-directml\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 18.14 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to C:\sddw_new\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [01:01<00:00, 69.4MB/s]
Calculating sha256 for C:\sddw_new\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 254.2s (prepare environment: 199.4s, initialize shared: 2.8s, other imports: 0.2s, list SD models: 64.1s, load scripts: 1.9s, create ui: 0.5s, gradio launch: 0.5s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from C:\sddw_new\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\sddw_new\stable-diffusion-webui-directml\configs\v1-inference.yaml
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\sd_models.py", line 621, in get_sd_model
    load_model()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\sd_models.py", line 782, in load_model
    with devices.autocast(), torch.no_grad():
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\devices.py", line 234, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 787, in current_device
    _lazy_init()
  File "C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

Stable diffusion model failed to load

------

(second run of webui-user.bat: identical output)

------

(venv) C:\sddw_new\stable-diffusion-webui-directml>pip install onnxruntime-directml
Collecting onnxruntime-directml
  Using cached onnxruntime_directml-1.17.3-cp310-cp310-win_amd64.whl (15.4 MB)
Requirement already satisfied: packaging in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (24.0)
Requirement already satisfied: protobuf in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (3.20.3)
Requirement already satisfied: sympy in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (1.12)
Requirement already satisfied: numpy>=1.21.6 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (1.26.2)
Requirement already satisfied: flatbuffers in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (24.3.25)
Requirement already satisfied: coloredlogs in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime-directml) (15.0.1)
Requirement already satisfied: humanfriendly>=9.1 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from coloredlogs->onnxruntime-directml) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from sympy->onnxruntime-directml) (1.3.0)
Requirement already satisfied: pyreadline3 in c:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-directml) (3.4.1)
Installing collected packages: onnxruntime-directml
ERROR: Could not install packages due to an OSError: [WinError 5] アクセスが拒否されました。: 'C:\\sddw_new\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll'
Check the permissions.

Additional information

Windows 10 Home 22H2
CPU: AMD Ryzen 9 5900X
GPU: AMD Radeon RX 7900 GRE (driver: 24.3.1)
RAM: DDR4 DIMM 3200MHz 16GB x2

lshqqytiger commented 5 months ago

Uninstall onnxruntime-directml first.

.\venv\Scripts\activate
pip uninstall onnxruntime-directml -y

then try again without reinstalling it. If it still doesn't work, add --skip-ort
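To confirm the uninstall actually took effect inside the venv before relaunching, a quick check (this helper is my own sketch, not part of the repo) can list any installed onnxruntime distributions:

```python
import importlib.metadata

def installed_onnxruntime_dists() -> list:
    """List installed distributions whose name starts with 'onnxruntime'.

    After `pip uninstall onnxruntime-directml -y`, the result should
    no longer include 'onnxruntime-directml'.
    """
    names = []
    for dist in importlib.metadata.distributions():
        name = dist.metadata["Name"]
        if name and name.lower().startswith("onnxruntime"):
            names.append(name)
    return sorted(names)

print(installed_onnxruntime_dists())
```

Run it with the venv activated so it inspects the venv's site-packages, not the system Python.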

KwaiBird commented 5 months ago

Uninstall onnxruntime-directml first.

.\venv\Scripts\activate
pip uninstall onnxruntime-directml -y

then try again without reinstalling it. If it still doesn't work, add --skip-ort

Thank you. I followed the steps and ran a few commands myself. After that, the WebUI generated an image with the default settings, even after COMMANDLINE_ARGS was changed back to only --use-directml. I'm closing the issue.


Here is what I did. I followed the steps, but I still got the error below.

venv "C:\sddw_new\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.0
Commit hash: 174310096385aa33d7b174f231650a941d73f19b
Skipping onnxruntime installation.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --skip-ort
DirectML initialization failed: No module named 'torch_directml'
Traceback (most recent call last):
  File "C:\sddw_new\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\sddw_new\stable-diffusion-webui-directml\launch.py", line 44, in main
    start()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\launch_utils.py", line 696, in start
    import webui
  File "C:\sddw_new\stable-diffusion-webui-directml\webui.py", line 13, in <module>
    initialize.imports()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\initialize.py", line 36, in imports
    shared_init.initialize()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\shared_init.py", line 30, in initialize
    directml_do_hijack()
  File "C:\sddw_new\stable-diffusion-webui-directml\modules\dml\__init__.py", line 76, in directml_do_hijack
    if not torch.dml.has_float64_support(device):
  File "C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\torch\__init__.py", line 1938, in __getattr__
    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
AttributeError: module 'torch' has no attribute 'dml'
続行するには何かキーを押してください . . . (= Press any key to continue . . .)

So I ran these commands:

.\venv\Scripts\activate
pip uninstall torch_directml -y
pip install torch_directml
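After reinstalling, a small smoke test run inside the activated venv (my own sketch; it is guarded so it degrades gracefully when the package is absent) can verify the import before launching the full WebUI:

```python
import importlib.util

def directml_available() -> bool:
    """True if the torch_directml package is importable in this venv."""
    return importlib.util.find_spec("torch_directml") is not None

if directml_available():
    import torch_directml
    # torch_directml.device() returns the default DirectML device handle.
    print("DirectML device:", torch_directml.device())
else:
    print("torch_directml is not installed; run: pip install torch_directml")
```

If this prints the fallback message, the `AttributeError: module 'torch' has no attribute 'dml'` from the log above is expected, since the DirectML hijack cannot initialize without the package.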

Finally, I reached the WebUI in my Firefox browser and succeeded in generating an image with the default settings.

venv "C:\sddw_new\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.0
Commit hash: 174310096385aa33d7b174f231650a941d73f19b
Skipping onnxruntime installation.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\sddw_new\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --skip-ort
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [6ce0161689] from C:\sddw_new\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\sddw_new\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.3s (prepare environment: 18.5s, initialize shared: 2.0s, other imports: 0.3s, load scripts: 2.5s, create ui: 0.7s, gradio launch: 0.4s).
Applying attention optimization: InvokeAI... done.
Model loaded in 3.8s (load weights from disk: 0.2s, create model: 0.5s, apply weights to model: 2.6s, apply half(): 0.1s, calculate empty prompt: 0.4s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00,  1.11s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:21<00:00,  1.06s/it]

My COMMANDLINE_ARGS is now set to only --use-directml.