vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: Always using CPU, can't switch to GPU #3001

Closed: mr-september closed this issue 6 months ago

mr-september commented 7 months ago

Issue Description

I have both SD.Next and the A1111-DirectML fork installed. On A1111 I can run on GPU no problem. On SD.Next, I cannot figure out how to switch to GPU.

I have tried --reinstall, manually deleting and re-generating the venv folder, launch arguments --use-directml and --use-zluda. None of them work.

I am running an AMD CPU and an AMD GPU (RX 5700), with the latest GPU drivers (24.3.1).

How can I switch to GPU?

Version Platform Description

18:21:07-234553 INFO     Starting SD.Next
18:21:07-239250 INFO     Logger: file="E:\SD_Next\sdnext.log" level=DEBUG size=65 mode=create
18:21:07-241919 INFO     Python 3.10.11 on Windows
18:21:07-391372 INFO     Version: app=sd.next updated=2024-03-21 hash=82973c49 branch=master
                         url=https://github.com/vladmandic/automatic.git/tree/master
18:21:08-195038 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.11

Relevant log output

Using VENV: E:\SD_Next\venv
18:21:07-234553 INFO     Starting SD.Next
18:21:07-239250 INFO     Logger: file="E:\SD_Next\sdnext.log" level=DEBUG size=65 mode=create
18:21:07-241919 INFO     Python 3.10.11 on Windows
18:21:07-391372 INFO     Version: app=sd.next updated=2024-03-21 hash=82973c49 branch=master
                         url=https://github.com/vladmandic/automatic.git/tree/master
18:21:08-195038 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.11
18:21:08-201038 DEBUG    Setting environment tuning
18:21:08-203234 DEBUG    HF cache folder: C:\Users\Larry\.cache\huggingface\hub
18:21:08-204236 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
18:21:08-208235 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
18:21:08-218236 DEBUG    Package not found: torch-directml
18:21:08-230234 WARNING  ZLUDA failed to initialize: no HIP SDK found
18:21:08-232234 INFO     Using CPU-only Torch
18:21:08-233234 DEBUG    Installing torch: torch torchvision
18:21:08-320266 WARNING  Modified files: ['repositories/BLIP/BLIP.gif', 'repositories/CodeFormer/.gitignore']
18:21:08-368263 DEBUG    Repository update time: Thu Mar 21 13:23:50 2024
18:21:08-370264 INFO     Startup: standard
18:21:08-372266 INFO     Verifying requirements
18:21:08-383264 INFO     Verifying packages
18:21:08-385263 INFO     Verifying submodules
18:21:10-889061 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
18:21:10-965893 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
18:21:11-037940 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
18:21:11-122988 DEBUG    Submodule: extensions-builtin/sd-webui-controlnet / main
18:21:11-284096 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
18:21:11-357481 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
18:21:11-440999 DEBUG    Submodule: modules/k-diffusion / master
18:21:11-536519 DEBUG    Submodule: wiki / master
18:21:11-603922 DEBUG    Register paths
18:21:11-839878 DEBUG    Installed packages: 222
18:21:11-841878 DEBUG    Extensions all: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg']
18:21:11-845878 DEBUG    Running extension installer: E:\SD_Next\extensions-builtin\clip-interrogator-ext\install.py
18:21:17-151817 DEBUG    Running extension installer: E:\SD_Next\extensions-builtin\sd-extension-system-info\install.py
18:21:17-815964 DEBUG    Running extension installer: E:\SD_Next\extensions-builtin\sd-webui-agent-scheduler\install.py
18:21:18-491068 DEBUG    Running extension installer: E:\SD_Next\extensions-builtin\sd-webui-controlnet\install.py
18:21:19-171411 DEBUG    Running extension installer:
                         E:\SD_Next\extensions-builtin\stable-diffusion-webui-images-browser\install.py
18:21:19-839889 DEBUG    Running extension installer: E:\SD_Next\extensions-builtin\stable-diffusion-webui-rembg\install.py
18:21:20-538864 DEBUG    Extensions all: ['sd-webui-reactor', 'ultimate-upscale-for-automatic1111']
18:21:20-540864 DEBUG    Running extension installer: E:\SD_Next\extensions\sd-webui-reactor\install.py
18:21:34-141581 INFO     Extension installed packages: sd-webui-reactor ['Cython==3.0.9', 'albumentations==1.4.2',
                         'scikit-learn==1.4.1.post1', 'joblib==1.3.2', 'insightface==0.7.3']
18:21:34-373932 INFO     Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner',
                         'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'sd-webui-reactor',
                         'ultimate-upscale-for-automatic1111']
18:21:34-378932 INFO     Verifying requirements
18:21:34-389932 DEBUG    Setup complete without errors: 1711351294
18:21:34-408937 DEBUG    Extension preload: {'extensions-builtin': 0.01, 'extensions': 0.0}
18:21:34-410939 DEBUG    Starting module: <module 'webui' from 'E:\\SD_Next\\webui.py'>
18:21:34-414279 INFO     Command line args: ['--debug', '--use-zluda', '--autolaunch'] autolaunch=True use_zluda=True
                         debug=True
18:21:34-418279 DEBUG    Env flags: []
18:21:41-030431 INFO     Load packages: {'torch': '2.2.1+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
18:21:42-227494 DEBUG    Read: file="config.json" json=30 bytes=1279 time=0.000
18:21:42-232497 DEBUG    Unknown settings: ['cross_attention_options']
18:21:42-234497 INFO     Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="Scaled-Dot-Product" mode=no_grad
18:21:42-237494 INFO     Device:
18:21:42-239493 DEBUG    Read: file="html\reference.json" json=36 bytes=21493 time=0.000
18:21:43-570071 DEBUG    ONNX: version=1.17.1 provider=CPUExecutionProvider, available=['TensorrtExecutionProvider',
                         'CUDAExecutionProvider', 'CPUExecutionProvider']
18:21:43-727160 DEBUG    Importing LDM
18:21:43-756160 DEBUG    Entering start sequence
18:21:43-760159 DEBUG    Initializing
18:21:43-788159 INFO     Available VAEs: path="models\VAE" items=0
18:21:43-791159 INFO     Disabled extensions: ['sd-webui-controlnet']
18:21:43-795159 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=0 time=0.00
18:21:43-798159 DEBUG    Read: file="cache.json" json=2 bytes=588 time=0.000
18:21:43-803168 DEBUG    Read: file="metadata.json" json=96 bytes=236392 time=0.002
18:21:43-808165 INFO     Available models: path="models\Stable-diffusion" items=15 time=0.01
18:21:43-876218 DEBUG    Load extensions
18:21:44-113360 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 18:21:44-109358
                         INFO     LoRA networks: available=42 folders=2
18:21:44-511612 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite
                         file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
18:21:46-057273 DEBUG    Extensions init time: 2.18 clip-interrogator-ext=0.18 sd-webui-agent-scheduler=0.35
                         stable-diffusion-webui-images-browser=0.30 sd-webui-reactor=1.23
18:21:46-076269 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
18:21:46-080270 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
18:21:46-084270 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=0
18:21:46-092270 DEBUG    Load upscalers: total=52 downloaded=2 user=0 time=0.03 ['None', 'Lanczos', 'Nearest', 'ChaiNNer',
                         'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
18:21:46-122499 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
18:21:46-127223 DEBUG    Creating UI
18:21:46-129323 INFO     UI theme: name="black-teal" style=Auto base=sdnext.css
18:21:46-143032 DEBUG    UI initialize: txt2img
18:21:46-228291 DEBUG    Extra networks: page='model' items=51 subfolders=3 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.06 thumb=0.01 desc=0.01 info=0.01 workers=4
18:21:46-276186 DEBUG    Extra networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.06 thumb=0.00 desc=0.00 info=0.00 workers=4
18:21:46-284184 DEBUG    Extra networks: page='embedding' items=6 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.04 thumb=0.01 desc=0.01 info=0.01 workers=4
18:21:46-291184 DEBUG    Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img folders=['models\\hypernetworks']
                         list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
18:21:46-298186 DEBUG    Extra networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4
18:21:46-310189 DEBUG    Extra networks: page='lora' items=42 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.06 thumb=0.01 desc=0.03 info=0.05 workers=4
18:21:46-403710 DEBUG    UI initialize: img2img
18:21:46-523241 DEBUG    UI initialize: control models=models\control
18:21:46-972383 DEBUG    Read: file="ui-config.json" json=49 bytes=3150 time=0.000
18:21:47-071411 DEBUG    Themes: builtin=12 gradio=5 huggingface=55
18:21:48-624103 DEBUG    Extension list: processed=296 installed=10 enabled=9 disabled=1 visible=296 hidden=0
18:21:49-037822 DEBUG    Root paths: ['E:\\SD_Next']
18:21:49-135333 INFO     Local URL: http://127.0.0.1:7860/
18:21:49-137333 DEBUG    Gradio functions: registered=2224
18:21:49-139333 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
18:21:49-143332 DEBUG    Creating API
18:21:49-234851 DEBUG    SD-System-Info: benchmark data loaded:
                         E:\SD_Next\extensions-builtin\sd-extension-system-info\scripts\benchmark-data-local.json
18:21:49-320363 INFO     [AgentScheduler] Task queue is empty
18:21:49-323367 INFO     [AgentScheduler] Registering APIs
18:21:49-632579 DEBUG    Scripts setup: ['IP Adapters:0.016', 'AnimateDiff:0.009', 'ReActor:0.035', 'X/Y/Z Grid:0.012',
                         'Face:0.014', 'Image-to-Video:0.007', 'Stable Video Diffusion:0.006', 'Ultimate SD upscale:0.006']
18:21:49-636580 DEBUG    Model metadata: file="metadata.json" no changes
18:21:49-638578 DEBUG    Model requested: fn=<lambda>
18:21:49-640579 INFO     Select: model="realisticVisionV51_v51VAE [15012c538f]"
18:21:49-642578 DEBUG    Load model: existing=False
                         target=E:\SD_Next\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors info=None
18:21:49-646578 INFO     Torch override VAE dtype: no-half set
18:21:49-647578 DEBUG    Desired Torch parameters: dtype=FP32 no-half=False no-half-vae=True upscast=True
18:21:49-651578 INFO     Setting Torch parameters: device=cpu dtype=torch.float32 vae=torch.float32 unet=torch.float32
                         context=no_grad fp16=None bf16=None optimization=Scaled-Dot-Product
18:21:49-656578 DEBUG    Diffusers loading: path="E:\SD_Next\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors"
18:21:49-658578 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="E:\SD_Next\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors" size=2034MB
18:21:53-028262 INFO     MOTD: N/A
18:21:53-796222 DEBUG    Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype':
                         torch.float32, 'load_connected_pipeline': True, 'extract_ema': True, 'original_config_file':
                         'configs/v1-inference.yaml', 'use_safetensors': True}
18:21:53-802221 DEBUG    Setting model: enable model CPU offload
18:21:53-810223 DEBUG    Setting model: enable VAE slicing
18:21:54-075716 INFO     Load embeddings: loaded=5 skipped=1 time=0.26
18:21:54-369268 DEBUG    GC: collected=0 device=cpu {'ram': {'used': 4.63, 'total': 31.92}} time=0.29
18:21:54-379265 INFO     Load model: time=4.43 load=4.43 native=512 {'ram': {'used': 4.63, 'total': 31.92}}
18:21:54-387225 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.48 system-info.py:app_started=0.08
                         task_scheduler.py:app_started=0.32
18:21:54-390223 DEBUG    Save: file="config.json" json=30 bytes=1240 time=0.004
18:21:54-395222 INFO     Startup time: 19.96 torch=5.37 olive=0.08 gradio=1.16 libraries=2.70 extensions=2.18 face-restore=0.07
                         ui-en=0.28 ui-txt2img=0.07 ui-img2img=0.08 ui-control=0.12 ui-extras=0.24 ui-settings=0.22
                         ui-extensions=1.47 ui-defaults=0.09 launch=0.41 api=0.08 app-started=0.41 checkpoint=4.75
18:21:54-404220 DEBUG    Unused settings: ['cross_attention_options']
18:21:54-423739 INFO     Launching browser
18:21:59-675137 DEBUG    Server: alive=True jobs=1 requests=85 uptime=18 memory=4.63/31.92 backend=Backend.DIFFUSERS state=idle
18:22:11-296159 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 OPR/107.0.0.0
18:22:13-467641 INFO     MOTD: N/A
18:22:31-457749 DEBUG    Themes: builtin=12 gradio=5 huggingface=55
18:22:43-682552 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 OPR/107.0.0.0
18:24:00-442127 DEBUG    Server: alive=True jobs=1 requests=212 uptime=139 memory=4.63/31.92 backend=Backend.DIFFUSERS
                         state=idle


### Backend

Original

### Branch

Master

### Model

SD 1.5

### Acknowledgements

- [X] I have read the above and searched for existing issues
- [X] I confirm that this is classified correctly and it's not an extension issue
vladmandic commented 7 months ago
mr-september commented 7 months ago

Thank you for the suggestions.

I have run --reinstall --use-directml, and am still stuck on CPU.

For ZLUDA, I googled this issue and found that my GPU (RX 5700) is not compatible. In my original post, I only switched from --use-directml to --use-zluda as a last-ditch attempt at troubleshooting. Previously it had always been --use-directml.

New log:

Using VENV: E:\SD_Next\venv
23:15:21-233075 INFO     Starting SD.Next
23:15:21-242076 INFO     Logger: file="E:\SD_Next\sdnext.log" level=INFO size=15982 mode=append
23:15:21-246076 INFO     Python 3.10.11 on Windows
23:15:21-462192 INFO     Version: app=sd.next updated=2024-03-21 hash=82973c49 branch=master
                         url=https://github.com/vladmandic/automatic.git/tree/master
23:15:22-459910 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.11
23:15:22-464910 INFO     Using DirectML Backend
23:15:22-465910 INFO     Installing package: torch-directml
23:16:02-604318 INFO     Installing package: onnxruntime-directml
23:16:07-942781 INFO     Installing package: torch-directml
23:16:10-630714 WARNING  Modified files: ['repositories/BLIP/BLIP.gif', 'repositories/CodeFormer/.gitignore']
23:16:10-632713 INFO     Forcing reinstall of all packages
23:16:10-633714 INFO     Startup: standard
23:16:10-634714 INFO     Verifying requirements
23:16:10-640716 INFO     Installing package: setuptools
23:16:21-902612 INFO     Installing package: patch-ng
23:16:24-210105 INFO     Installing package: anyio
23:16:26-482196 INFO     Installing package: addict
23:16:29-022890 INFO     Installing package: astunparse
23:16:31-437665 INFO     Installing package: blendmodes
23:16:34-009416 INFO     Installing package: clean-fid
23:16:36-569166 INFO     Installing package: filetype
23:16:38-852415 INFO     Installing package: future
23:16:41-144484 INFO     Installing package: GitPython
23:16:43-466064 INFO     Installing package: httpcore
23:16:46-140693 INFO     Installing package: inflection
23:16:48-667387 INFO     Installing package: jsonmerge
23:16:50-971538 INFO     Installing package: kornia
23:16:53-541550 INFO     Installing package: lark
23:16:55-843862 INFO     Installing package: lpips
23:16:58-601905 INFO     Installing package: omegaconf
23:17:01-428349 INFO     Installing package: open-clip-torch
23:17:04-040094 INFO     Installing package: optimum
23:17:08-000395 INFO     Installing package: piexif
23:17:10-291383 INFO     Installing package: psutil
23:17:12-709429 INFO     Installing package: pyyaml
23:17:15-009296 INFO     Installing package: resize-right
23:17:17-527566 INFO     Installing package: rich
23:17:20-031750 INFO     Installing package: safetensors
23:17:22-607369 INFO     Installing package: scipy
23:17:24-979639 INFO     Installing package: tensordict==0.1.2
23:17:27-539507 INFO     Installing package: toml
23:17:29-881748 INFO     Installing package: torchdiffeq
23:17:32-485521 INFO     Installing package: voluptuous
23:17:35-106732 INFO     Installing package: yapf
23:17:37-673188 INFO     Installing package: scikit-image
23:17:40-014743 INFO     Installing package: fasteners
23:17:42-576558 INFO     Installing package: dctorch
23:17:45-125108 INFO     Installing package: pymatting
23:17:47-708617 INFO     Installing package: peft
23:17:50-289595 INFO     Installing package: orjson
23:17:52-737025 INFO     Installing package: httpx==0.24.1
23:17:55-347619 INFO     Installing package: compel==2.0.2
23:17:57-901413 INFO     Installing package: torchsde==0.2.6
23:18:00-375279 INFO     Installing package: clip-interrogator==0.6.0
23:18:03-064876 INFO     Installing package: antlr4-python3-runtime==4.9.3
23:18:05-438976 INFO     Installing package: requests==2.31.0
23:18:07-800051 INFO     Installing package: tqdm==4.66.1
23:18:10-114638 INFO     Installing package: accelerate==0.28.0
23:18:12-464690 INFO     Installing package: opencv-contrib-python-headless==4.9.0.80
23:18:15-052719 INFO     Installing package: diffusers==0.27.0
23:18:17-684409 INFO     Installing package: einops==0.4.1
23:18:20-192560 INFO     Installing package: gradio==3.43.2
23:18:22-743086 INFO     Installing package: huggingface_hub==0.21.4
23:18:25-126079 INFO     Installing package: numexpr==2.8.8
23:18:27-511815 INFO     Installing package: numpy==1.26.4
23:18:29-985710 INFO     Installing package: numba==0.59.0
23:18:32-389913 INFO     Installing package: pandas
23:18:35-009929 INFO     Installing package: protobuf==3.20.3
23:18:37-579341 INFO     Installing package: pytorch_lightning==1.9.4
23:18:40-138576 INFO     Installing package: tokenizers==0.15.2
23:18:42-674174 INFO     Installing package: transformers==4.38.2
23:18:45-108382 INFO     Installing package: tomesd==0.1.3
23:18:47-696106 INFO     Installing package: urllib3==1.26.18
23:18:49-991037 INFO     Installing package: Pillow==10.2.0
23:18:52-439277 INFO     Installing package: timm==0.9.12
23:18:55-003409 INFO     Installing package: pydantic==1.10.13
23:18:57-396694 INFO     Installing package: typing-extensions==4.9.0
23:18:59-958459 INFO     Verifying packages
23:18:59-959458 INFO     Installing package: git+https://github.com/openai/CLIP.git
23:19:06-353752 INFO     Installing package:
                         git+https://github.com/patrickvonplaten/invisible-watermark.git@remove_onnxruntime_depedency
23:19:12-547100 INFO     Installing package: pi-heif
23:19:15-167521 INFO     Installing package: tensorflow==2.13.0
23:19:28-525269 INFO     Verifying submodules
23:19:53-108673 INFO     Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner',
                         'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'sd-webui-reactor',
                         'ultimate-upscale-for-automatic1111']
23:19:53-115667 INFO     Verifying requirements
23:19:53-117668 INFO     Installing package: setuptools
23:19:55-669989 INFO     Installing package: patch-ng
23:19:58-014064 INFO     Installing package: anyio
23:20:00-359436 INFO     Installing package: addict
23:20:02-671191 INFO     Installing package: astunparse
23:20:04-946801 INFO     Installing package: blendmodes
23:20:13-570386 INFO     Installing package: clean-fid
23:20:15-897803 INFO     Installing package: filetype
23:20:18-238518 INFO     Installing package: future
23:20:20-528913 INFO     Installing package: GitPython
23:20:22-837174 INFO     Installing package: httpcore
23:20:25-467538 INFO     Installing package: inflection
23:20:27-847198 INFO     Installing package: jsonmerge
23:20:30-241793 INFO     Installing package: kornia
23:20:32-599388 INFO     Installing package: lark
23:20:34-862181 INFO     Installing package: lpips
23:20:37-195117 INFO     Installing package: omegaconf
23:20:39-520617 INFO     Installing package: open-clip-torch
23:20:41-871524 INFO     Installing package: optimum
23:20:44-267140 INFO     Installing package: piexif
23:20:46-569337 INFO     Installing package: psutil
23:20:48-927149 INFO     Installing package: pyyaml
23:20:51-262839 INFO     Installing package: resize-right
23:20:53-559929 INFO     Installing package: rich
23:20:55-912494 INFO     Installing package: safetensors
23:20:58-283972 INFO     Installing package: scipy
23:21:00-642537 INFO     Installing package: tensordict==0.1.2
23:21:02-935603 INFO     Installing package: toml
23:21:05-220815 INFO     Installing package: torchdiffeq
23:21:07-559336 INFO     Installing package: voluptuous
23:21:09-870494 INFO     Installing package: yapf
23:21:12-199118 INFO     Installing package: scikit-image
23:21:14-548637 INFO     Installing package: fasteners
23:21:16-826366 INFO     Installing package: dctorch
23:21:19-127047 INFO     Installing package: pymatting
23:21:21-424033 INFO     Installing package: peft
23:21:23-778523 INFO     Installing package: orjson
23:21:26-279053 INFO     Installing package: httpx==0.24.1
23:21:28-931094 INFO     Installing package: compel==2.0.2
23:21:31-309373 INFO     Installing package: torchsde==0.2.6
23:21:33-616999 INFO     Installing package: clip-interrogator==0.6.0
23:21:36-077828 INFO     Installing package: antlr4-python3-runtime==4.9.3
23:21:38-428053 INFO     Installing package: requests==2.31.0
23:21:40-709471 INFO     Installing package: tqdm==4.66.1
23:21:43-045823 INFO     Installing package: accelerate==0.28.0
23:21:45-355931 INFO     Installing package: opencv-contrib-python-headless==4.9.0.80
23:21:47-723803 INFO     Installing package: diffusers==0.27.0
23:21:50-033196 INFO     Installing package: einops==0.4.1
23:21:52-340991 INFO     Installing package: gradio==3.43.2
23:21:56-988556 INFO     Installing package: huggingface_hub==0.21.4
23:21:59-326790 INFO     Installing package: numexpr==2.8.8
23:22:01-698774 INFO     Installing package: numpy==1.26.4
23:22:04-174111 INFO     Installing package: numba==0.59.0
23:22:06-560948 INFO     Installing package: pandas
23:22:08-966588 INFO     Installing package: protobuf==3.20.3
23:22:11-384848 INFO     Installing package: pytorch_lightning==1.9.4
23:22:13-737050 INFO     Installing package: tokenizers==0.15.2
23:22:16-169505 INFO     Installing package: transformers==4.38.2
23:22:18-534281 INFO     Installing package: tomesd==0.1.3
23:22:20-836837 INFO     Installing package: urllib3==1.26.18
23:22:23-168859 INFO     Installing package: Pillow==10.2.0
23:22:25-579275 INFO     Installing package: timm==0.9.12
23:22:27-890818 INFO     Installing package: pydantic==1.10.13
23:22:30-272818 INFO     Installing package: typing-extensions==4.9.0
23:22:34-653682 INFO     Command line args: ['--medvram', '--autolaunch', '--reinstall', '--use-directml'] medvram=True
                         autolaunch=True use_directml=True reinstall=True
23:22:49-248250 INFO     Load packages: {'torch': '2.0.0+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
23:22:55-745949 INFO     Engine: backend=Backend.DIFFUSERS compute=directml device=privateuseone:0 attention="Dynamic Attention
                         BMM" mode=no_grad
23:22:55-862098 INFO     Device: device=AMD Radeon RX 5700 XT 50th Anniversary n=1 directml=0.2.0.dev230426
23:23:01-875173 INFO     Available VAEs: path="models\VAE" items=0
23:23:01-879173 INFO     Disabled extensions: ['sd-webui-controlnet']
23:23:02-147547 INFO     Available models: path="models\Stable-diffusion" items=15 time=0.27
23:23:02-160547 INFO     Installing package: basicsr
23:23:04-843660 INFO     Installing package: gfpgan
23:23:08-386706 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 23:23:08-369708
                         INFO     LoRA networks: available=42 folders=2
23:23:10-412170 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite
                         file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
23:23:13-534086 INFO     UI theme: name="black-teal" style=Auto base=sdnext.css
23:23:17-501929 INFO     Local URL: http://127.0.0.1:7860/
23:23:17-687216 INFO     [AgentScheduler] Task queue is empty
23:23:17-689217 INFO     [AgentScheduler] Registering APIs
23:23:17-830297 INFO     Select: model="realisticVisionV51_v51VAE [15012c538f]"
23:23:17-833297 INFO     Torch override VAE dtype: no-half set
23:23:17-835297 INFO     Setting Torch parameters: device=privateuseone:0 dtype=torch.float32 vae=torch.float32
                         unet=torch.float32 context=no_grad fp16=None bf16=None optimization=Dynamic Attention BMM
23:23:17-840298 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="E:\SD_Next\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors" size=2034MB
23:23:28-190653 INFO     Load embeddings: loaded=5 skipped=1 time=0.33
23:23:28-511705 INFO     Load model: time=10.36 load=10.36 native=512 {'ram': {'used': 4.68, 'total': 31.92}, 'gpu': {'used':
                         0.0, 'total': 0.01}, 'retries': 0, 'oom': 0}
23:23:28-518702 INFO     Startup time: 53.84 torch=8.68 onnx=0.12 olive=0.24 gradio=5.53 libraries=12.51 ldm=0.06 samplers=0.05
                         extensions=4.81 models=0.27 face-restore=5.92 upscalers=0.09 networks=0.57 ui-en=0.29 ui-txt2img=0.07
                         ui-img2img=0.08 ui-control=0.75 ui-extras=0.24 ui-settings=0.33 ui-extensions=1.62 ui-defaults=0.31
                         launch=0.26 api=0.08 app-started=0.23 checkpoint=10.69
23:23:28-534703 INFO     Launching browser
23:23:32-900108 INFO     MOTD: N/A
23:23:46-774758 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 OPR/107.0.0.0
vladmandic commented 7 months ago

in the last log torch-directml was correctly identified and installed, but it seems you also have torch installed as a system package (outside the venv), so when sdnext was starting, python decided that one had higher priority. make sure that no system-wide torch is installed before running the installer.
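A quick way to confirm which torch Python would actually pick up is to check the import origin from inside the activated venv. This is a generic sketch (not part of SD.Next): it reports whether a venv is active and the file path `torch` resolves to, without importing it.

```python
import sys
import importlib.util

def module_origin(name: str):
    """Return the file path Python would import `name` from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

def in_virtualenv() -> bool:
    """True when running inside a venv (sys.prefix is redirected from the base)."""
    return sys.prefix != sys.base_prefix

if __name__ == "__main__":
    print("virtual env active:", in_virtualenv())
    print("torch loads from:", module_origin("torch"))
```

If the reported path points under the system Python's `site-packages` rather than `E:\SD_Next\venv`, the system package is shadowing the venv install.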

mr-september commented 7 months ago

I have uninstalled the system torch and run another --reinstall --use-directml.

My System -> System Info page still seems to indicate CPU torch.

[screenshot]

Attempting to generate any text-to-image fails with `Torch not compiled with CUDA enabled`.
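For context on this error: torch-directml exposes the GPU as a separate `privateuseone` device rather than CUDA, so any code path that defaults to `cuda` raises exactly this error on a CPU-only torch build. A minimal device-selection sketch, assuming the `torch_directml` package and its `device()` helper are available when installed via --use-directml:

```python
import importlib.util

def pick_device():
    """Return a DirectML device if torch-directml is installed, else 'cpu'.

    torch_directml.device() yields a 'privateuseone:N' device; falling back
    to 'cpu' avoids the 'Torch not compiled with CUDA enabled' error that a
    hard-coded .cuda() call raises on a CPU-only torch build.
    """
    if importlib.util.find_spec("torch_directml") is not None:
        import torch_directml  # assumption: installed by the --use-directml path
        return torch_directml.device()
    return "cpu"

if __name__ == "__main__":
    print("selected device:", pick_device())
```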

Using VENV: E:\SD_Next\venv
12:33:16-653051 INFO     Starting SD.Next
12:33:16-657051 INFO     Logger: file="E:\SD_Next\sdnext.log" level=INFO size=69908 mode=append
12:33:16-659051 INFO     Python 3.10.11 on Windows
12:33:16-879084 INFO     Version: app=sd.next updated=2024-03-21 hash=82973c49 branch=master
                         url=https://github.com/vladmandic/automatic.git/tree/master
12:33:17-361549 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.11
12:33:17-370548 INFO     Using DirectML Backend
12:33:17-373547 INFO     Installing package: torch-directml
12:33:22-629580 INFO     Installing package: onnxruntime-directml
12:33:25-285265 INFO     Installing package: torch-directml
12:33:27-867267 WARNING  Modified files: ['repositories/BLIP/BLIP.gif', 'repositories/CodeFormer/.gitignore']
12:33:27-869267 INFO     Forcing reinstall of all packages
12:33:27-871271 INFO     Startup: standard
12:33:27-872268 INFO     Verifying requirements
12:33:27-873268 INFO     Installing package: setuptools
12:33:30-516772 INFO     Installing package: patch-ng
12:33:32-956726 INFO     Installing package: anyio
12:33:35-478518 INFO     Installing package: addict
12:33:37-989170 INFO     Installing package: astunparse
12:49:04-423129 INFO     Installing package: blendmodes
12:49:07-165459 INFO     Installing package: clean-fid
12:49:10-020548 INFO     Installing package: filetype
12:49:12-594100 INFO     Installing package: future
12:49:15-258609 INFO     Installing package: GitPython
12:49:17-848347 INFO     Installing package: httpcore
12:49:20-814231 INFO     Installing package: inflection
12:49:23-325051 INFO     Installing package: jsonmerge
12:49:26-083375 INFO     Installing package: kornia
12:49:28-592236 INFO     Installing package: lark
12:49:31-109587 INFO     Installing package: lpips
12:49:33-637486 INFO     Installing package: omegaconf
12:49:36-222925 INFO     Installing package: open-clip-torch
12:49:39-300012 INFO     Installing package: optimum
12:49:42-110791 INFO     Installing package: piexif
12:49:44-601110 INFO     Installing package: psutil
12:49:47-135631 INFO     Installing package: pyyaml
12:49:49-661785 INFO     Installing package: resize-right
12:49:52-123002 INFO     Installing package: rich
12:49:54-712699 INFO     Installing package: safetensors
12:49:57-315626 INFO     Installing package: scipy
12:49:59-914868 INFO     Installing package: tensordict==0.1.2
12:50:02-624181 INFO     Installing package: toml
12:50:05-158343 INFO     Installing package: torchdiffeq
12:50:07-648961 INFO     Installing package: voluptuous
12:50:10-189645 INFO     Installing package: yapf
12:50:12-688614 INFO     Installing package: scikit-image
12:50:15-598188 INFO     Installing package: fasteners
12:50:18-440689 INFO     Installing package: dctorch
12:50:21-292042 INFO     Installing package: pymatting
12:50:24-003695 INFO     Installing package: peft
12:50:26-743189 INFO     Installing package: orjson
12:50:29-424412 INFO     Installing package: httpx==0.24.1
12:50:32-583777 INFO     Installing package: compel==2.0.2
12:50:35-446475 INFO     Installing package: torchsde==0.2.6
12:50:38-190323 INFO     Installing package: clip-interrogator==0.6.0
12:50:41-004938 INFO     Installing package: antlr4-python3-runtime==4.9.3
12:50:43-468446 INFO     Installing package: requests==2.31.0
12:50:45-986196 INFO     Installing package: tqdm==4.66.1
12:50:48-608532 INFO     Installing package: accelerate==0.28.0
12:50:51-095292 INFO     Installing package: opencv-contrib-python-headless==4.9.0.80
12:50:53-895978 INFO     Installing package: diffusers==0.27.0
12:50:56-929141 INFO     Installing package: einops==0.4.1
12:50:59-690978 INFO     Installing package: gradio==3.43.2
12:51:02-925329 INFO     Installing package: huggingface_hub==0.21.4
12:51:05-420077 INFO     Installing package: numexpr==2.8.8
12:51:08-099915 INFO     Installing package: numpy==1.26.4
12:51:10-689727 INFO     Installing package: numba==0.59.0
12:51:13-193982 INFO     Installing package: pandas
12:51:15-742042 INFO     Installing package: protobuf==3.20.3
12:51:18-323442 INFO     Installing package: pytorch_lightning==1.9.4
12:51:20-843331 INFO     Installing package: tokenizers==0.15.2
12:51:23-435163 INFO     Installing package: transformers==4.38.2
12:51:25-963830 INFO     Installing package: tomesd==0.1.3
12:51:28-885452 INFO     Installing package: urllib3==1.26.18
12:51:31-386560 INFO     Installing package: Pillow==10.2.0
12:51:33-977729 INFO     Installing package: timm==0.9.12
12:51:36-686045 INFO     Installing package: pydantic==1.10.13
12:51:39-210047 INFO     Installing package: typing-extensions==4.9.0
12:51:41-653049 INFO     Verifying packages
12:51:41-654050 INFO     Installing package: git+https://github.com/openai/CLIP.git
12:51:47-310806 INFO     Installing package:
                         git+https://github.com/patrickvonplaten/invisible-watermark.git@remove_onnxruntime_depedency
12:51:54-502280 INFO     Installing package: pi-heif
12:51:57-197673 INFO     Installing package: tensorflow==2.13.0
12:52:14-209832 INFO     Verifying submodules
12:52:27-582314 INFO     Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner',
                         'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'sd-webui-reactor',
                         'ultimate-upscale-for-automatic1111']
12:52:27-587312 INFO     Verifying requirements
12:52:27-589317 INFO     Installing package: setuptools
12:52:30-253661 INFO     Installing package: patch-ng
12:52:32-896111 INFO     Installing package: anyio
12:52:35-375280 INFO     Installing package: addict
12:52:38-027492 INFO     Installing package: astunparse
12:52:40-500204 INFO     Installing package: blendmodes
12:52:48-784954 INFO     Installing package: clean-fid
12:52:51-297493 INFO     Installing package: filetype
12:52:53-709370 INFO     Installing package: future
12:52:56-179816 INFO     Installing package: GitPython
12:52:58-642143 INFO     Installing package: httpcore
12:53:01-447945 INFO     Installing package: inflection
12:53:03-908273 INFO     Installing package: jsonmerge
12:53:06-373082 INFO     Installing package: kornia
12:53:08-828891 INFO     Installing package: lark
12:53:11-310029 INFO     Installing package: lpips
12:53:13-760996 INFO     Installing package: omegaconf
12:53:16-230179 INFO     Installing package: open-clip-torch
12:53:18-694770 INFO     Installing package: optimum
12:53:21-224678 INFO     Installing package: piexif
12:53:23-649572 INFO     Installing package: psutil
12:53:26-107789 INFO     Installing package: pyyaml
12:53:28-572647 INFO     Installing package: resize-right
12:53:31-032504 INFO     Installing package: rich
12:53:33-547915 INFO     Installing package: safetensors
12:53:36-045273 INFO     Installing package: scipy
12:53:38-527591 INFO     Installing package: tensordict==0.1.2
12:53:41-002542 INFO     Installing package: toml
12:53:43-392751 INFO     Installing package: torchdiffeq
12:53:45-844750 INFO     Installing package: voluptuous
12:53:48-261509 INFO     Installing package: yapf
12:53:50-712123 INFO     Installing package: scikit-image
12:53:53-214833 INFO     Installing package: fasteners
12:53:55-671073 INFO     Installing package: dctorch
12:53:58-126813 INFO     Installing package: pymatting
12:54:00-603860 INFO     Installing package: peft
12:54:03-040799 INFO     Installing package: orjson
12:54:05-642282 INFO     Installing package: httpx==0.24.1
12:54:08-409281 INFO     Installing package: compel==2.0.2
12:54:10-887291 INFO     Installing package: torchsde==0.2.6
12:54:13-331587 INFO     Installing package: clip-interrogator==0.6.0
12:54:15-806524 INFO     Installing package: antlr4-python3-runtime==4.9.3
12:54:18-243464 INFO     Installing package: requests==2.31.0
12:54:20-688878 INFO     Installing package: tqdm==4.66.1
12:54:23-194651 INFO     Installing package: accelerate==0.28.0
12:54:25-652859 INFO     Installing package: opencv-contrib-python-headless==4.9.0.80
12:54:28-143011 INFO     Installing package: diffusers==0.27.0
12:54:30-631284 INFO     Installing package: einops==0.4.1
12:54:33-077785 INFO     Installing package: gradio==3.43.2
12:54:37-818799 INFO     Installing package: huggingface_hub==0.21.4
12:54:40-298664 INFO     Installing package: numexpr==2.8.8
12:54:42-788837 INFO     Installing package: numpy==1.26.4
12:54:45-404541 INFO     Installing package: numba==0.59.0
12:54:47-893875 INFO     Installing package: pandas
12:54:50-398457 INFO     Installing package: protobuf==3.20.3
12:54:52-962479 INFO     Installing package: pytorch_lightning==1.9.4
12:54:55-447198 INFO     Installing package: tokenizers==0.15.2
12:54:58-050460 INFO     Installing package: transformers==4.38.2
12:55:00-569363 INFO     Installing package: tomesd==0.1.3
12:55:02-989954 INFO     Installing package: urllib3==1.26.18
12:55:05-458184 INFO     Installing package: Pillow==10.2.0
12:55:08-036208 INFO     Installing package: timm==0.9.12
12:55:10-480720 INFO     Installing package: pydantic==1.10.13
12:55:13-025067 INFO     Installing package: typing-extensions==4.9.0
12:55:17-542089 INFO     Command line args: ['--medvram', '--reinstall', '--autolaunch', '--use-directml'] medvram=True
                         autolaunch=True use_directml=True reinstall=True
12:55:23-148875 INFO     Load packages: {'torch': '2.0.0+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
12:55:24-487886 INFO     Engine: backend=Backend.DIFFUSERS compute=directml device=privateuseone:0 attention="Dynamic Attention
                         BMM" mode=no_grad
12:55:24-566405 INFO     Device: device=AMD Radeon RX 5700 XT 50th Anniversary n=1 directml=0.2.0.dev230426
12:55:25-541270 INFO     Available VAEs: path="models\VAE" items=0
12:55:25-544274 INFO     Disabled extensions: ['sd-webui-controlnet']
12:55:25-557270 INFO     Available models: path="models\Stable-diffusion" items=15 time=0.01
12:55:25-562270 INFO     Installing package: basicsr
12:55:28-332998 INFO     Installing package: gfpgan
12:55:31-252895 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 12:55:31-248895
                         INFO     LoRA networks: available=42 folders=2
12:55:31-766650 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite
                         file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
12:55:32-233329 INFO     UI theme: name="black-teal" style=Auto base=sdnext.css
12:55:35-391115 INFO     Local URL: http://127.0.0.1:7860/
12:55:35-564496 INFO     [AgentScheduler] Task queue is empty
12:55:35-566495 INFO     [AgentScheduler] Registering APIs
12:55:35-713017 INFO     Select: model="realisticVisionV51_v51VAE [15012c538f]"
12:55:35-717014 INFO     Torch override VAE dtype: no-half set
12:55:35-718013 INFO     Setting Torch parameters: device=privateuseone:0 dtype=torch.float32 vae=torch.float32
                         unet=torch.float32 context=no_grad fp16=None bf16=None optimization=Dynamic Attention BMM
12:55:35-723017 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="E:\SD_Next\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors" size=2034MB
12:55:37-919032 INFO     Load embeddings: loaded=5 skipped=1 time=0.26
12:55:38-176572 INFO     Load model: time=2.21 load=2.20 native=512 {'ram': {'used': 4.67, 'total': 31.92}, 'gpu': {'used':
                         0.0, 'total': 0.01}, 'retries': 0, 'oom': 0}
12:55:38-183573 INFO     Startup time: 20.64 torch=4.23 olive=0.09 gradio=1.25 libraries=2.36 extensions=1.00 face-restore=5.63
                         ui-en=0.23 ui-txt2img=0.06 ui-img2img=0.08 ui-control=0.35 ui-extras=0.05 ui-models=0.25
                         ui-settings=0.22 ui-extensions=1.46 ui-defaults=0.08 launch=0.39 api=0.08 app-started=0.24
                         checkpoint=2.47
12:55:38-196574 INFO     Launching browser
12:55:41-942767 INFO     MOTD: N/A
12:55:55-576261 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 OPR/107.0.0.0
14:33:16-176212 INFO     Base: class=StableDiffusionPipeline
Progress ?it/s                                              0% 0/20 00:00 ? Base
14:33:18-225421 INFO     Torch not compiled with CUDA enabled
14:33:18-227422 WARNING  Processing returned no results
14:33:18-229422 INFO     Processed: images=0 time=2.34 its=0.00 memory={'ram': {'used': 5.49, 'total': 31.92}, 'gpu': {'used':
                         1.02, 'total': 1.47}, 'retries': 0, 'oom': 0}
vladmandic commented 7 months ago

this install looks clean, so why are you saying it's using CPU? torch-directml does not identify itself once installed, so system info may not show it - check actual GPU usage.
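One quick way to confirm whether the DirectML backend is actually reachable from Python is to ask torch-directml directly (a minimal diagnostic sketch, assuming torch-directml is what the venv should contain; it degrades gracefully if the package is missing):

```python
# Probe the DirectML backend directly, since system info may not report it.
# Guarded so it also runs on machines without torch-directml installed.
try:
    import torch_directml
    if torch_directml.is_available():
        status = "available: " + torch_directml.device_name(0)
    else:
        status = "installed, but no DirectML device found"
except ImportError:
    status = "torch-directml is not installed in this venv"

print(status)
```

If this prints a device name (e.g. the RX 5700 XT), the backend is reachable and the problem is elsewhere in the pipeline.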

ccppoo commented 7 months ago


I installed with ZLUDA; it takes about 3 minutes for 35 iterations at 512x512 when generating.

Is this problem due to a wrong installation?

Options I use with webui.bat: --autolaunch --skip-torch --use-zluda


edit:

installation log :

21:17:41-645205 INFO     Verifying requirements
21:17:41-648898 WARNING  Package version mismatch: typing-extensions 4.10.0 required 4.9.0
21:17:41-649901 INFO     Installing package: typing-extensions==4.9.0
21:17:44-828416 INFO     Command line args: ['--autolaunch', '--skip-torch', '--use-zluda', '--use-xformers']
                         autolaunch=True use_zluda=True use_xformers=True skip_torch=True
21:17:51-787309 INFO     Load packages: {'torch': '2.2.1+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
21:17:54-161151 INFO     Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="xFormers" mode=no_grad
21:17:54-162151 INFO     Device:
21:17:56-808198 INFO     Create: folder="models\ONNX"
21:17:56-809287 INFO     Create: folder="models\Stable-diffusion"
21:17:56-810487 INFO     Create: folder="models\Diffusers"

is torch having a problem detecting the GPU?
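The `'torch': '2.2.1+cpu'` in the load log is itself the answer: the `+cpu` local-version suffix means a CPU-only torch wheel is installed, so no GPU backend can possibly be detected. A small sketch of that check (the log line is copied from above):

```python
import re

# Line copied from the startup log above; the "+cpu" suffix in torch's
# version string marks a CPU-only wheel, which can never see a GPU.
log_line = "Load packages: {'torch': '2.2.1+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}"

match = re.search(r"'torch':\s*'([^']+)'", log_line)
torch_version = match.group(1) if match else None
cpu_only = torch_version is not None and "+cpu" in torch_version

print(torch_version, "-> CPU-only wheel" if cpu_only else "")  # 2.2.1+cpu -> CPU-only wheel
```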

vladmandic commented 7 months ago

@ccppoo this thread is about DirectML, let's not confuse things. And for ZLUDA, refer to the wiki and reach out on Discord.

mr-september commented 7 months ago

I see; however, I am getting "Torch not compiled with CUDA enabled", and neither text-to-image nor image-to-image generates anything, so I can't check the actual GPU usage.

ccppoo commented 7 months ago

@mr-september I solved the problem by reinstalling the AMD "PRO" driver instead of using Adrenalin. You can select the "PRO" option in the latest AMD driver installer.

PrincipalSkinner commented 7 months ago

I'm having this same issue. I've done some simple checks that I can do, and the torch used in my sdnext venv is the exact same as the torch used by lshqqytiger's AMD fork of the automatic1111 repository, which I do not have this issue with. I'm not sure what that means, if anything; just sharing that info. The same is true for torch-directml, but not onnxruntime. I can't get the other (what is that UI's name? Do people just refer to it as WebUI?) to run with onnxruntime 1.17.1 and have to downgrade to 1.14.0 due to a DLL load error, but that does not appear to be the issue here. To be sure, I did downgrade onnxruntime to 1.14.0 and the issue persisted, so I reinstalled 1.17.1 for SDNext.

One thing I will say that I find odd is that SDNext tells me every boot up that it's installing torch-directml, even though torch-directml is installed already.


I don't think it can be an issue with a superseding package, because I have no packages in my python install. All packages are in the venvs for the UIs that I use.


I'll attach my latest log to see if that helps to shine any light on this. sdnext.log

I see the comment above about using PRO drivers. I'm reluctant to switch my driver package because I'm not familiar with the differences, and the PRO revision is a year and a half old whereas Adrenalin was updated just a couple of weeks ago. It feels like if the issue were the driver, I should have the same or similar issues on other UIs, and I don't. I have, however, reinstalled the Adrenalin drivers and the issue persists.

If there's anything else I can provide to help resolve this issue, let me know.

mr-september commented 7 months ago

@PrincipalSkinner I'm sorry, I solved this "Torch not compiled with CUDA enabled" error a couple of days ago but forgot what I did. It might have something to do with deleting torch, torchvision, and/or torch-directml from the venv folders, then manually doing pip installs. Or maybe it was diffusers. Or maybe something else entirely unrelated to deleting/re-installing packages.

Not sure though. Someone with more experience might be able to help.

PrincipalSkinner commented 6 months ago

UPDATE: I've reinstalled again, and this time it's generating without issue. So I'm starting to think that it's something I'm doing AFTER install that is causing issues. So this time I exited, and restarted and generating was still fine. Here are the two logs from the install and running.

[good install] sdnext.log [good run] sdnext.log

From here I'll retrace all of my steps after installing. Next will be to introduce the --medvram, --autolaunch, and --experimental arguments. If those produce the failure, I'll attach logs for those. If not, I'll move to getting my model directory squared away. By this point it SHOULD be giving me the CUDA error.

Note for the following: --use-directml and --debug are present for all.

I am getting the error with --medvram, and generation was NOT successful. [medvram] sdnext.log

I am not getting the error with --autolaunch, and generation was successful. I am not getting the error with --experimental, and generation was successful. I am not getting the error with both --experimental and --autolaunch, and generation was successful.

I am not getting the error with --experimental --autolaunch and specifying a model directory, and generation was successful.

Out of curiosity, I tried it with --lowvram, and I did get an actual error (with stacktrace) while asserting the "torch not compiled with CUDA enabled" error, but generation is perfectly fine. The error does not show in the sdnext log, so this is from cmd.

output.txt

It looks like the issue is going to be something with the --medvram argument. I hope this helps to narrow it down. If there is anything else I can do to help, please let me know. I'm not too versed with python, but I do code in other languages (C#, TCL/TK, Lua, etc) so if you'd like for me to try some stuff I can probably do it if you give me explicit instructions.
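For context on the message itself: "Torch not compiled with CUDA enabled" is the stock AssertionError torch raises whenever a CUDA API is invoked on a non-CUDA build, which fits the theory that the --medvram/--lowvram offload path is hard-coding a CUDA call somewhere instead of using the DirectML device. A hedged reproduction of the error class (guarded so it runs on any torch build, or none; the offload-path claim is an assumption, not confirmed by the maintainer):

```python
# Reproduce the error class: calling .cuda() on a torch build without CUDA
# support raises AssertionError("Torch not compiled with CUDA enabled").
# An offload path that hard-codes .cuda()/torch.cuda.* would hit this on
# DirectML installs, where the device is privateuseone:0, not cuda:0.
err = None
try:
    import torch
    if not torch.cuda.is_available():
        try:
            torch.empty(1).cuda()
        except (AssertionError, RuntimeError) as exc:
            err = str(exc)
except ImportError:
    pass

print(err)
```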

@vladmandic -- only reason I'm pinging you is because I don't know if you saw this comment before I edited, and I don't know if github informs you when comments are edited.


UPDATE: After doing the below, I deleted my SDNext directories (the newest install was a temp directory) and reinstalled. This time I'm 100% certain I ran it with --use-directml (and there is no indication in the log that it installed the CPU-only torch), yet I'm getting "Torch not compiled with CUDA enabled" again.

Here are the install and latest logs. [Install] sdnext.log [latest] sdnext.log


IRRELEVANT INFORMATION BELOW

A reply that seems to have been deleted brought my attention here for a moment. While here, I checked to see how development's going and figured I'd try out the dev branch, just to see if perhaps it would work. On the initial run I noticed something that I didn't see previously: it said something about installing CPU-only torch. Then I recalled that I didn't use the --use-directml argument when I called webui.bat. So I immediately halted the setup and tried it on master... and that seems to have been the problem. I swear I reinstalled master 2 times trying to get it to work, and used the --use-directml argument on the latter 2, but apparently I didn't.

So, if anyone else comes along and gets this error: reinstall and make sure you specify --use-directml in the initial run of the webui.bat file. This is specified in the install instructions, but I can see how it might be missed. I'm sure there's another way to fix it via removing/reinstalling packages, but I'm not sure which packages those are so reinstalling was pretty much the only option.

vladmandic commented 6 months ago

btw, good writeup, wish there were more like that!

if you install wrong torch by mistake, you can force reinstall by using something like --use-directml --reinstall

mr-september commented 6 months ago

Alright closing this since everyone seems to have solved it.

Basically:

  1. make sure your system/path Python is 3.10, not 3.11 or later (blame this on the DirectML team being glacial and overly stringent on version requirements)
  2. --use-directml --reinstall.
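The Python version requirement in step 1 can be checked up front, before webui.bat builds the venv (a minimal sketch; 3.10 being the safe version for torch-directml is taken from this thread, not from official DirectML docs):

```python
import sys

# torch-directml wheels are pinned to specific Python versions; per this
# thread, 3.10 is the version to use on Windows (3.11+ is not supported).
supported = sys.version_info[:2] == (3, 10)
version = "{}.{}".format(*sys.version_info[:2])

if supported:
    print("Python " + version + ": OK for torch-directml")
else:
    print("Python " + version + ": not supported, install 3.10")
```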
PrincipalSkinner commented 6 months ago

I haven't solved it. I used --use-directml in the initial run, which installed everything correctly. The issue is with --medvram, and potentially --lowvram. Without either of those I do not get the "torch not compiled with CUDA" error, but that also means I'm very limited in what I can do on an 8GB card. With either of those arguments I do get that error. With --medvram no image is generated. With --lowvram a stacktrace is given along with the error (see log above), but an image IS generated.

I think the confusion is the order of my edits above. The latest information is at the top. All of my testing of various arguments was the last thing I did. I thought I cleared up this confusion when I mentioned that irrelevant information was below, which is where I spoke of ensuring that --use-directml was used when installing. I apologize for my vagueness.

However, just to be 100% sure, I've done exactly as stated here. With --medvram, which is my preferred argument, I still can not generate an image at all after doing --use-directml --reinstall. I've also tested checking the box in the settings for medvram instead of supplying the argument, in hopes that it would somehow be different, but nothing changed.

Reinstalling: [reinstall] sdnext.log
Running again, specifying --medvram (torch not compiled with CUDA): [medvram] sdnext.log

Edit: Also, it still kind of bothers me that every time I run it says it's reinstalling torch-directml. I'm not sure if that's been looked into or if it's even relevant to what's going on with my main issue, but it doesn't hurt to mention it again.

From the reinstall log:

2024-04-18 09:53:42,204 | sd | INFO | installer | Installing package: torch-directml
2024-04-18 09:53:42,204 | sd | DEBUG | installer | Running pip: install --upgrade torch-directml 
...
2024-04-18 09:53:47,322 | sd | DEBUG | installer | Installing torch: torch-directml
2024-04-18 09:53:47,323 | sd | INFO | installer | Installing package: torch-directml
2024-04-18 09:53:47,325 | sd | DEBUG | installer | Running pip: install --upgrade torch-directml 

From the run right after reinstalling, where I supplied --medvram

2024-04-18 10:08:10,905 | sd | INFO | installer | Using DirectML Backend
2024-04-18 10:08:10,907 | sd | DEBUG | installer | Installing torch: torch-directml