lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Loading gets stuck #253

Closed: pixelfixinit closed 7 months ago

pixelfixinit commented 1 year ago

Is there an existing issue for this?

What happened?

(screenshot attached)

Steps to reproduce the problem

  1. Get the files via git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
  2. Run "webui-user.bat" and wait (equivalent commands are sketched below)
  3. It gets stuck while loading
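
For reference, the steps above correspond to roughly the following commands in a Windows Command Prompt (a sketch; the folder name is simply the default one created by git clone):

  rem Clone the repository (creates a stable-diffusion-webui-directml folder)
  git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
  rem Change into it and launch; the first run creates a venv and installs torch, torchvision, etc.
  cd stable-diffusion-webui-directml
  webui-user.bat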

What should have happened?

It should have continued loading.

Version or Commit where the problem happens

I couldn't find the version.

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

I only clicked webui-user.bat twice
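
For context, this fork (like the upstream web UI) reads its launch arguments from webui-user.bat rather than from the command line; the stock file contains a line like the one below, which was evidently left empty here, matching the bare "Launching Web UI with arguments:" line in the log:

  rem In webui-user.bat (sketch of the stock file; no arguments set)
  set COMMANDLINE_ARGS=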

List of extensions

None

Console logs

Creating venv in directory D:\AI\stable-diffusion-webui-directml\venv using python "C:\Users\myname\AppData\Local\Programs\Python\Python310\python.exe"
venv "D:\AI\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.2
Commit hash: 253a6bbfa651168dea13bb37be17e8a47c183bf2
Installing torch and torchvision
Collecting torch==2.0.0
  Using cached torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1
  Using cached torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting torch-directml
  Using cached torch_directml-0.2.0.dev230426-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting networkx
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting filelock
  Using cached filelock-3.12.2-py3-none-any.whl (10 kB)
Collecting sympy
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting numpy
  Using cached numpy-1.25.2-cp310-cp310-win_amd64.whl (15.6 MB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-10.0.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl (96 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torch-directml
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.2.0 filelock-3.12.2 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.25.2 pillow-10.0.0 requests-2.31.0 sympy-1.12 torch-2.0.0 torch-directml-0.2.0.dev230426 torchvision-0.15.1 typing-extensions-4.7.1 urllib3-2.0.4

[notice] A new release of pip available: 22.2.1 -> 23.2.1
[notice] To update, run: D:\AI\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into D:\AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai...
Cloning into 'D:\AI\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 574, done.
remote: Counting objects: 100% (304/304), done.
remote: Compressing objects: 100% (86/86), done.
remote: Total 574 (delta 244), reused 218 (delta 218), pack-reused 270
Receiving objects: 100% (574/574), 73.43 MiB | 5.22 MiB/s, done.
Resolving deltas: 100% (276/276), done.
Cloning Stable Diffusion XL into D:\AI\stable-diffusion-webui-directml\repositories\generative-models...
Cloning into 'D:\AI\stable-diffusion-webui-directml\repositories\generative-models'...
remote: Enumerating objects: 740, done.
remote: Counting objects: 100% (563/563), done.
remote: Compressing objects: 100% (283/283), done.
remote: Total 740 (delta 340), reused 425 (delta 266), pack-reused 177
Receiving objects: 100% (740/740), 22.31 MiB | 3.22 MiB/s, done.
Resolving deltas: 100% (378/378), done.
Cloning K-diffusion into D:\AI\stable-diffusion-webui-directml\repositories\k-diffusion...
Cloning into 'D:\AI\stable-diffusion-webui-directml\repositories\k-diffusion'...
remote: Enumerating objects: 957, done.
remote: Counting objects: 100% (957/957), done.
remote: Compressing objects: 100% (359/359), done.
remote: Total 957 (delta 647), reused 882 (delta 591), pack-reused 0
Receiving objects: 100% (957/957), 188.40 KiB | 2.00 MiB/s, done.
Resolving deltas: 100% (647/647), done.
Cloning CodeFormer into D:\AI\stable-diffusion-webui-directml\repositories\CodeFormer...
Cloning into 'D:\AI\stable-diffusion-webui-directml\repositories\CodeFormer'...
remote: Enumerating objects: 594, done.
remote: Counting objects: 100% (245/245), done.
remote: Compressing objects: 100% (98/98), done.
remote: Total 594 (delta 176), reused 167 (delta 147), pack-reused 349
Receiving objects: 100% (594/594), 17.31 MiB | 3.93 MiB/s, done.
Resolving deltas: 100% (287/287), done.
Cloning BLIP into D:\AI\stable-diffusion-webui-directml\repositories\BLIP...
Cloning into 'D:\AI\stable-diffusion-webui-directml\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 2.42 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to D:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [05:42<00:00, 12.5MB/s]
Calculating sha256 for D:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 552.9s (launcher: 194.2s, import torch: 4.7s, import gradio: 1.9s, setup paths: 2.2s, other imports: 3.3s, setup codeformer: 0.1s, list SD models: 343.4s, load scripts: 1.9s, load upscalers: 0.1s, initialize extra networks: 0.1s, create ui: 0.9s, gradio launch: 0.1s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from D:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\AI\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: InvokeAI... done.
Model loaded in 93.9s (calculate hash: 6.7s, load weights from disk: 0.4s, create model: 0.8s, apply weights to model: 75.5s, apply half(): 4.7s, load VAE: 0.2s, move model to device: 2.7s, hijack: 0.7s, load textual inversion embeddings: 0.6s, calculate empty prompt: 1.5s).

Additional information

I removed and reinstalled it again and again, but it doesn't work. When I check Task Manager I see 800-1200 MB of RAM usage each time, but 0% CPU and no other activity.

ohthehugemanatee commented 1 year ago

OP, it looks like it's actually running based on that output. Look at the line roughly ten lines up from the bottom:

Calculating sha256 for D:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

It tells you the web UI is running and gives you the URL to open.
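
For anyone landing here with the same symptom: once the "Running on local URL" line appears, the server is up even though the console looks idle (the long "list SD models" and hash-calculation steps in the log just make the first startup slow). A quick check, assuming the default 127.0.0.1:7860 address, is to open it from another Command Prompt:

  rem Opens the web UI in the default browser (assumes the default port 7860)
  start http://127.0.0.1:7860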