lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Windows10 + AMD6500XT failed to use SD #416

Closed LYC878484 closed 3 months ago

LYC878484 commented 3 months ago

Checklist

What happened?

The console prints "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check" on Windows 10 with an AMD 6500 XT.

The full log is in the Console logs section below.

I tried --use-directml and --use-zluda instead of --skip-torch-cuda-test, but neither worked.

Steps to reproduce the problem

1. Follow the installation instructions.
2. Run .\webui-user.bat. The web UI exits and tells me I do not have an NVIDIA card or NVIDIA driver.

What should have happened?

Stable Diffusion should work on Windows 10 with the AMD 6500 XT, but it does not.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

.\webui-user.bat failed, so no sysinfo is available.

Console logs

PS D:\GitResource\stable-diffusion-webui> .\webui-user.bat
venv "D:\GitResource\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Traceback (most recent call last):
  File "D:\GitResource\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "D:\GitResource\stable-diffusion-webui\launch.py", line 39, in main
    prepare_environment()
  File "D:\GitResource\stable-diffusion-webui\modules\launch_utils.py", line 386, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Additional information

AMD Radeon RX 6500 XT

LYC878484 commented 3 months ago

With --use-zluda, the error log is:

PS D:\GitResource\stable-diffusion-webui> .\webui-user.bat
venv "D:\GitResource\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --use-zluda --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check
WARNING:sgm.modules.diffusionmodules.model:no module 'xformers'. Processing without...
WARNING:sgm.modules.attention:no module 'xformers'. Processing without...
usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] [--skip-torch-cuda-test] ... (full option list trimmed)
launch.py: error: unrecognized arguments: --use-zluda
Press any key to continue . . .

lshqqytiger commented 3 months ago

You are on the upstream repository, not the stable-diffusion-webui-directml fork.

LYC878484 commented 3 months ago

> You are on the upstream repository, not the stable-diffusion-webui-directml fork.

Now my path is D:\GitResource\stable-diffusion-webui-directml:

PS D:\GitResource\stable-diffusion-webui-directml> .\webui-user.bat
venv "D:\GitResource\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
LYC use directml.
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: 25a3b6cbeea8a07afd5e4594afc2f1c79f41ac1a
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check
DirectML initialization failed: No module named 'torch_directml'
Traceback (most recent call last):
  File "D:\GitResource\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "D:\GitResource\stable-diffusion-webui-directml\launch.py", line 44, in main
    start()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\launch_utils.py", line 665, in start
    import webui
  File "D:\GitResource\stable-diffusion-webui-directml\webui.py", line 13, in <module>
    initialize.imports()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\initialize.py", line 36, in imports
    shared_init.initialize()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\shared_init.py", line 31, in initialize
    directml_do_hijack()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\dml\__init__.py", line 76, in directml_do_hijack
    if not torch.dml.has_float64_support(device):
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\torch\__init__.py", line 1932, in __getattr__
    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
AttributeError: module 'torch' has no attribute 'dml'
Press any key to continue . . .
Terminate batch job (Y/N)? N

torch-directml was not downloaded automatically.
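As a quick sanity check before launching the UI, one can verify from inside the venv whether the torch-directml wheel is actually importable and sees an adapter. This is a hypothetical helper sketch; `device_count` and `device_name` are torch-directml's standard entry points:

```python
def directml_status():
    """Best-effort report on the torch-directml backend (sketch)."""
    try:
        import torch_directml  # fails if the wheel was never installed, as in the log above
    except ImportError as exc:
        return f"torch_directml missing: {exc}"
    count = torch_directml.device_count()
    if count == 0:
        return "torch_directml installed, but no DirectML adapter found"
    return f"{count} DirectML adapter(s), first: {torch_directml.device_name(0)}"

print(directml_status())
```

If this prints "torch_directml missing", the `AttributeError: module 'torch' has no attribute 'dml'` above is expected, since the fork's DirectML hijack needs the module present.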

LYC878484 commented 3 months ago

I removed venv and reran webui-user.bat; the log is below. Two questions: (1) does it matter that it prints "Failed to automatically patch torch with ZLUDA. Could not find ZLUDA from PATH."? (2) it says "You are running torch 2.0.0+cpu.", why CPU torch and not the AMD GPU?

PS D:\GitResource\stable-diffusion-webui-directml> .\webui-user.bat
Creating venv in directory D:\GitResource\stable-diffusion-webui-directml\venv using python "D:\Program Files\Python\Python310\python.exe"
venv "D:\GitResource\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
LYC use directml.
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: 25a3b6cbeea8a07afd5e4594afc2f1c79f41ac1a
Installing torch and torchvision
Collecting torch==2.0.0
  Downloading torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1
  Downloading torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting torch-directml
  Downloading torch_directml-0.2.0.dev230426-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting jinja2
  Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Collecting networkx
  Downloading networkx-3.2.1-py3-none-any.whl (1.6 MB)
Collecting sympy
  Downloading sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Collecting filelock
  Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting numpy
  Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting pillow!=8.3.*,>=5.3.0
  Downloading pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.6-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
Collecting mpmath>=0.19
  Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torch-directml
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.0.0 torch-directml-0.2.0.dev230426 torchvision-0.15.1 typing-extensions-4.10.0 urllib3-2.2.1

[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: D:\GitResource\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip install --upgrade pip
Failed to automatically patch torch with ZLUDA. Could not find ZLUDA from PATH.
Installing clip
Installing open_clip
Installing requirements
Installing onnxruntime-directml
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']

You are running torch 2.0.0+cpu. The program is tested to work with torch 2.1.2. To reinstall the desired version, run with commandline flag --reinstall-torch. Beware that this will cause a lot of large files to be downloaded, as well as there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.

Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to D:\GitResource\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors

2%|██▎ | 81.8M/3.97G [00:56<1:23:29, 834kB/s]

lshqqytiger commented 3 months ago

Decide which backend you want: DirectML or ZLUDA. For DirectML, a CPU build of torch is not a problem, because torch-directml is used as an external backend module. For ZLUDA, you need a CUDA build of torch.
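The distinction can be checked from the installed torch build string. A best-effort sketch (the function name is made up here; it assumes nothing beyond a standard torch install):

```python
def torch_build_kind():
    """Classify the installed torch build for DirectML-vs-ZLUDA purposes (sketch)."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    # torch.version.cuda is None on CPU-only builds such as "2.0.0+cpu"
    if getattr(torch.version, "cuda", None):
        return f"{torch.__version__}: CUDA build (what ZLUDA needs)"
    return f"{torch.__version__}: CPU build (fine for DirectML with torch-directml)"

print(torch_build_kind())
```

On the setup in this thread it would report a CPU build, which matches the "You are running torch 2.0.0+cpu." message and is expected for DirectML.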

LYC878484 commented 3 months ago

I use DirectML, thanks. The web UI can run now, but the model fails to load with the error below. Is the model file broken? safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

PS D:\GitResource\stable-diffusion-webui-directml> .\webui-user.bat
venv "D:\GitResource\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
LYC use directml.
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: 25a3b6cbeea8a07afd5e4594afc2f1c79f41ac1a
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']

You are running torch 2.0.0+cpu. The program is tested to work with torch 2.1.2. To reinstall the desired version, run with commandline flag --reinstall-torch. Beware that this will cause a lot of large files to be downloaded, as well as there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.

Loading weights [fdd6639133] from D:\GitResource\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
  File "D:\Program Files\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "D:\Program Files\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Program Files\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\initialize.py", line 148, in load_model
    shared.sd_model  # noqa: B018
  File "D:\GitResource\stable-diffusion-webui-directml\modules\shared_items.py", line 148, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 627, in get_sd_model
    load_model()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 723, in load_model
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 337, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 311, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 2.2s (prepare environment: 4.3s, initialize shared: 0.8s, load scripts: 0.6s, create ui: 0.4s, gradio launch: 0.2s).
Loading weights [fdd6639133] from D:\GitResource\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
loading stable diffusion model: SafetensorError
Traceback (most recent call last):
  File "D:\Program Files\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "D:\Program Files\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\ui.py", line 1796, in <lambda>
    visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit"
  File "D:\GitResource\stable-diffusion-webui-directml\modules\shared_items.py", line 148, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 627, in get_sd_model
    load_model()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 723, in load_model
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 337, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 311, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

Stable diffusion model failed to load
Loading weights [fdd6639133] from D:\GitResource\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors [fdd6639133]: SafetensorError
Traceback (most recent call last):
  File "D:\GitResource\stable-diffusion-webui-directml\modules\options.py", line 165, in set
    option.onchange()
  File "D:\GitResource\stable-diffusion-webui-directml\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\initialize_util.py", line 174, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 888, in reload_model_weights
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 337, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "D:\GitResource\stable-diffusion-webui-directml\modules\sd_models.py", line 311, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\GitResource\stable-diffusion-webui-directml\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
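MetadataIncompleteBuffer typically means the .safetensors file is truncated, for example by an interrupted download (the earlier log showed the 3.97 GB download at only 2%). The safetensors layout is an 8-byte little-endian header length, a JSON header, then the raw tensor bytes, so a truncation check needs only the standard library. A sketch, with `check_safetensors` being a name made up here:

```python
import json
import struct

def check_safetensors(path):
    """Best-effort truncation check for a .safetensors file (sketch).

    Layout: u64-LE header length, JSON header, then tensor bytes at the
    data_offsets declared in the header.
    """
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 8:
        return False, "shorter than the 8-byte length field"
    (header_len,) = struct.unpack("<Q", data[:8])
    if len(data) < 8 + header_len:
        return False, f"header claims {header_len} bytes, only {len(data) - 8} present"
    try:
        header = json.loads(data[8:8 + header_len])
    except ValueError:
        return False, "header is not valid JSON"
    # the data region must cover the largest declared tensor end offset
    end = max((t["data_offsets"][1] for k, t in header.items()
               if k != "__metadata__"), default=0)
    if len(data) - 8 - header_len < end:
        return False, "tensor data region is truncated"
    return True, "sizes are consistent"
```

If this reports truncation for v1-5-pruned-emaonly.safetensors, deleting the file and re-downloading it is the usual fix.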

lshqqytiger commented 3 months ago

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10199

LYC878484 commented 3 months ago

AUTOMATIC1111#10199 thanks, it works.