Closed Ael07 closed 1 month ago
Anybody got the same problem? It's just so weird: it works fine on CPU with any argument you put there, and not on GPU!
Can you share the model? I run SD 1.5 on a 5600 XT with no problem.
My output with the launch command:

```
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1-amd-4-gb0d9eb6df
Commit hash: b0d9eb6df1f6631a49988a9f705ff568f908aa2b
H:\StableDiff\ImggenAMD\Packages\Stable Diffusion Web UI\extensions\sd-webui-infinite-image-browsing\install.py:3: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
H:\StableDiff\ImggenAMD\Packages\Stable Diffusion Web UI\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --port 30248 --medvram --api --skip-torch-cuda-test --skip-python-version-check --listen --skip-version-check --enable-insecure-extension-access --ckpt-dir 'D:\zStableDiffModel' --deepdanbooru --disable-nan-check --opt-sub-quad-attention --gradio-allowed-path 'H:\StableDiff\ImggenAMD\Images'
ONNX: version=1.18.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Civitai Helper: Root Path is: H:\StableDiff\ImggenAMD\Packages\Stable Diffusion Web UI
Civitai Helper: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Using sqlite file: H:\StableDiff\ImggenAMD\Packages\Stable Diffusion Web UI\extensions\sd-webui-agent-scheduler\task_scheduler.sqlite3
Loading weights [efb352a7cb] from H:\StableDiff\ImggenAMD\Packages\Stable Diffusion Web UI\models\Stable-diffusion\CheckpointYesmix_v50.safetensors
```
Are you sure you are running on GPU? You are using --skip-torch-cuda-test, and that usually directs it to the CPU... as you can see above, with that flag it runs on my CPU, but it is very slow. My AMD GPU is a FirePro W7100 8 GB. Also, I noticed you are running Python 3.10.11; I'm running 3.10.6.
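As a quick sanity check on this point, the launch arguments can be scanned for flags that typically select a CPU or GPU backend. This is only an illustrative sketch: the flag lists below are assumptions chosen for demonstration, not the webui's actual dispatch logic.

```python
# Illustrative sketch: guess which backend a set of webui launch arguments
# selects. The flag sets here are assumptions for demonstration only.
import shlex

CPU_HINT_FLAGS = {"--skip-torch-cuda-test", "--use-cpu"}
GPU_BACKEND_FLAGS = {"--use-directml", "--use-zluda"}

def backend_hint(commandline_args: str) -> str:
    """Return a rough guess: 'gpu', 'cpu', or 'default'."""
    args = set(shlex.split(commandline_args))
    if args & GPU_BACKEND_FLAGS:
        return "gpu"
    if args & CPU_HINT_FLAGS:
        return "cpu"
    return "default"

print(backend_hint("--medvram --no-half --skip-torch-cuda-test"))  # cpu
print(backend_hint("--theme dark --use-directml --medvram"))       # gpu
```

Note that an explicit GPU backend flag like --use-directml should win over --skip-torch-cuda-test, which merely skips the startup check rather than forcing CPU by itself.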
Please reinstall; I see that ONNX failed to initialize for you. My log shows ONNX version 1.19.
ONNX failed on the CPU route as well, yet it worked just fine, so I'm not sure it is the missing piece here. Thanks @Ming2k8-Coder
Did you update to the latest GPU driver and install the Visual C++ AIO package from TechPowerUp?
And install the DirectX libraries (because DirectML runs on top of DirectX and uses the GPU's 3D engine).
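One way to verify that the DirectML stack is actually visible to the runtime is to ask ONNX Runtime which execution providers it can use. A hedged sketch; it degrades gracefully when onnxruntime is not installed:

```python
# Hedged sketch: query ONNX Runtime for its available execution providers.
# If "DmlExecutionProvider" is absent, the DirectML (DirectX-based) path is
# not usable in this environment, regardless of webui launch flags.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime is not installed in this environment

print("DmlExecutionProvider available:", "DmlExecutionProvider" in providers)
```

A healthy DirectML install should report something like `['DmlExecutionProvider', 'CPUExecutionProvider']`, matching the `ONNX: ... available=[...]` line in the working log above.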
I think DirectX is part of Windows. I'm away from that computer for a few days, but I think it is there already. I will check the GPU drivers when I'm back, thanks. I managed to run it on GPU months ago, but then a Stable Diffusion update broke the whole GPU pathway for me. @Ming2k8-Coder
I have the latest version of DirectX on my Windows. Like I said, it used to work for me on GPU; an update back in January messed things up. Can anyone replicate the error? It looks like I'm the only one with it; nobody else is reporting it!
OK, finally this problem is solved. Crazy what happened: it was a GPU driver issue. Thanks to ChatGPT I managed to narrow the error down to the driver, then did a clean uninstall of the AMD driver and installed the latest version supported by my card from the AMD website... and voilà! Basically it got corrupted or something, since it was working fine before.
What happened?
It works with all the flags (--disable-nan-check, --disable-safe-unpickle), and ONNX fails to initialize when it runs on CPU via --skip-torch-cuda-test:
```
C:\Users..\stable-diffusion-webui-amdgpu>git pull
Already up to date.
venv "C:\Users..\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-4-gb0d9eb6d
Commit hash: b0d9eb6df1f6631a49988a9f705ff568f908aa2b
Installing onnxruntime-gpu
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --medvram --no-half --precision full --opt-sub-quad-attention --opt-split-attention-v1 --theme dark --autolaunch --disable-safe-unpickle --disable-nan-check --skip-torch-cuda-test
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX failed to initialize: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
Loading weights [6ce0161689] from C:\Users..\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users..\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 15.3s (prepare environment: 68.2s, initialize shared: 1.7s, other imports: 0.8s, load scripts: 0.7s, create ui: 1.0s, gradio launch: 0.4s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 5.2s (load weights from disk: 0.7s, create model: 0.5s, apply weights to model: 2.8s, calculate empty prompt: 1.1s).
100%|██████████| 20/20 [06:42<00:00, 20.13s/it]
Total progress: 100%|██████████| 20/20 [06:32<00:00, 19.65s/it]
Total progress: 100%|██████████| 20/20 [06:32<00:00, 20.02s/it]
```

That's robust right there!!! And as soon as I switch to GPU using only --use-directml... I get this error:
```
Already up to date.
venv "C:\Users..\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-4-gb0d9eb6d
Commit hash: b0d9eb6df1f6631a49988a9f705ff568f908aa2b
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --theme dark --use-directml --medvram --autolaunch
ONNX failed to initialize: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
Loading weights [6ce0161689] from C:\Users..\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users..\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 16.2s (prepare environment: 20.9s, initialize shared: 2.2s, other imports: 0.8s, load scripts: 0.7s, create ui: 1.0s, gradio launch: 0.7s).
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users..\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users..\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users..\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 880, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 728, in get_empty_cond
    d = sd_model.get_learned_conditioning([""])
  File "C:\Users..\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 313, in forward
    return super().forward(texts)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 227, in forward
    z = self.process_tokens(tokens, multipliers)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 269, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 352, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1582, in _call_impl
    result = forward_call(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users..\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 734, in forward
    causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)
  File "C:\Users..\stable-diffusion-webui-amdgpu\modules\dml\hijack\transformers.py", line 17, in _make_causal_mask
    mask = mask.to(dtype)
RuntimeError: unknown error
Stable diffusion model failed to load
```
Steps to reproduce the problem
1. Clean install of stable-diffusion-webui-amdgpu
2. Launch the webui .bat file with arguments --theme dark --use-directml --medvram --autolaunch
3. The model fails to load
What should have happened?
For some reason the error originates in the Python 310 directory when using --use-directml, while everything works great on the skip-CUDA-test (CPU) route!
I have no idea why it fails; it would be great if someone could replicate the error and let me know what happens. Thanks.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
sysinfo-2024-09-01-12-00.json
Console logs
Additional information
No response