Bashlator opened 3 weeks ago
Hey, with DirectML it's really hard to get SDXL/Pony models working, as it requires a lot more VRAM than usual. Your best option is to use the ZLUDA backend for the WebUI. You can find an install guide here: https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides — follow the "Automatic1111 with ZLUDA" guide.
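For reference, after following that guide the launch flags live in webui-user.bat in the WebUI folder. A minimal sketch is below; the exact flag set depends on your setup, and --medvram here is an assumption for an 8 GB card, not something the guide mandates:

```shell
@echo off
:: webui-user.bat — sketch of a ZLUDA launch config for stable-diffusion-webui-amdgpu
set PYTHON=
set GIT=
set VENV_DIR=
:: --use-zluda selects the ZLUDA backend; --medvram trades speed for lower VRAM use
set COMMANDLINE_ARGS=--use-zluda --medvram
call webui.bat
```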
I did everything as in the tutorial; now when I try to select the model, the entire WebUI just crashes:
venv "D:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-16-g4730df18
Commit hash: 4730df185b557f1453a0f5f79ffd1fa7b36aae54
ROCm: agents=['gfx1010:xnack-']
ROCm: version=6.1, using agent gfx1010:xnack-
ZLUDA support: experimental
Using ZLUDA in D:\stable-diffusion-webui-amdgpu.zluda
D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --medvram --lowram
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: version=1.20.0 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Loading weights [67ab2fd8ec] from D:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Creating model from config: D:\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 14.3s (prepare environment: 20.6s, initialize shared: 1.0s, list SD models: 0.5s, load scripts: 0.7s, create ui: 0.5s, gradio launch: 0.5s).
creating model quickly: OSError
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
raise head_call_error
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
hf_raise_for_status(response)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error.
(Request ID: Root=1-6727d028-29f2248650f75c05602abfb0;109edd08-8add-4ce7-8177-24f4cd7bb744)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "D:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
sd_model = instantiate_from_config(sd_config.model, state_dict)
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
return constructor(**params)
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
self.conditioner = instantiate_from_config(
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
embedder = instantiate_from_config(embconfig)
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "D:\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3506, in from_pretrained
resolved_config_file = cached_file(
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>
Failed to create model quickly; will retry using slow method.
D:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return unsafe_torch_load(filename, *args, **kwargs)
Loading weights [67ab2fd8ec] from D:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Press any key to continue . . .
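A note on the 401 in the log above: it is not an authentication problem. The traceback shows CLIPTextModel.from_pretrained being handed a repo id of None, and huggingface_hub interpolates that value straight into the download URL, producing a request for a nonexistent repo literally named "None". A minimal sketch of the interpolation (resolve_url is a hypothetical helper written for illustration, not the library's actual function):

```python
# Hypothetical reconstruction of how the bogus URL arises: the repo id is
# interpolated into the hub's resolve URL, so Python's None becomes the
# literal path segment "None".
def resolve_url(repo_id, filename="config.json", revision="main"):
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(resolve_url(None))
# https://huggingface.co/None/resolve/main/config.json
```

That matches the URL in the RepositoryNotFoundError, which is why the server answers with a misleading 401/"Invalid username or password".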
Hey, as you only have 16 GB of RAM, you need to increase the Windows pagefile. Instructions here: https://www.tomshardware.com/news/how-to-manage-virtual-memory-pagefile-windows-10,36929.html Enable it only for the C: drive and disable it for any other drive. Set it to custom: 16000 MB minimum and 24000 MB maximum. Then restart the PC and try to load the model again.
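If you prefer the command line, the same pagefile settings can be sketched with wmic from an elevated prompt (wmic is deprecated on recent Windows builds, so treat this as an assumption and fall back to the GUI steps in the linked article; the sizes just mirror the ones suggested above):

```shell
:: Disable automatic pagefile management so a custom size can be set
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
:: Set a custom pagefile on C: (values in MB), then reboot for it to apply
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16000,MaximumSize=24000
```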
I don't even have 24000 MB left on my SSD. I guess I'll just use another model.
Never mind, even Waifu Diffusion isn't loading:
venv "D:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-16-g4730df18
Commit hash: 4730df185b557f1453a0f5f79ffd1fa7b36aae54
ROCm: agents=['gfx1010:xnack-']
ROCm: version=6.1, using agent gfx1010:xnack-
ZLUDA support: experimental
Using ZLUDA in D:\stable-diffusion-webui-amdgpu.zluda
D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --medvram --lowram
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: version=1.20.0 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Loading weights [c76e0962bc] from D:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\wd-1-4-anime_e2.ckpt
D:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return unsafe_torch_load(filename, *args, **kwargs)
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 39.7s (prepare environment: 63.8s, initialize shared: 3.0s, list SD models: 0.2s, load scripts: 1.4s, create ui: 0.6s, gradio launch: 0.5s).
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "D:\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 815, in load_model
checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models_config.py", line 125, in find_checkpoint_config
return guess_model_config_from_state_dict(state_dict, info.filename)
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models_config.py", line 98, in guess_model_config_from_state_dict
elif is_using_v_parameterization_for_sd2(sd):
File "D:\stable-diffusion-webui-amdgpu\modules\sd_models_config.py", line 67, in is_using_v_parameterization_for_sd2
out = (unet(x_test, torch.asarray([999], device=device), context=test_cond) - x_test).mean().cpu().item()
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
x = layer(x)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 458, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Loading weights [c76e0962bc] from D:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\wd-1-4-anime_e2.ckpt
D:\stable-diffusion-webui-amdgpu\modules\safe.py:156: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return unsafe_torch_load(filename, *args, **kwargs)
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\vasil\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\modules\ui.py", line 1740, in
Stable diffusion model failed to load
Loading weights [c76e0962bc] from D:\stable-diffusion-webui-amdgpu\models\Stable-diffusion\wd-1-4-anime_e2.ckpt
changing setting sd_model_checkpoint to wd-1-4-anime_e2.ckpt [c76e0962bc]: RuntimeError
Traceback (most recent call last):
File "D:\stable-diffusion-webui-amdgpu\modules\options.py", line 165, in set
option.onchange()
File "D:\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 14, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui-amdgpu\modules\initialize_util.py", line 181, in
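For reference, the RuntimeError in the second log ("Input type (float) and bias type (struct c10::Half) should be the same") fires inside PyTorch's conv2d, which requires input, weight, and bias to share one dtype. Reading the traceback, the WebUI's v-parameterization probe builds a test tensor that ends up float32 while the checkpoint's weights were loaded in half precision. A minimal sketch of that rule, modeled in plain Python (conv2d_dtype_check is a hypothetical stand-in, not a PyTorch API):

```python
# Hypothetical stand-in for the dtype consistency rule that torch's
# F.conv2d enforces: input, weight, and bias must share one dtype.
def conv2d_dtype_check(input_dtype: str, weight_dtype: str, bias_dtype: str) -> None:
    if not (input_dtype == weight_dtype == bias_dtype):
        raise RuntimeError(
            f"Input type ({input_dtype}) and bias type ({bias_dtype}) "
            "should be the same"
        )

# The situation from the log: a float32 probe tensor meets fp16 weights.
try:
    conv2d_dtype_check("float", "struct c10::Half", "struct c10::Half")
except RuntimeError as err:
    print(err)  # Input type (float) and bias type (struct c10::Half) should be the same
```

Launching with --no-half (a real A1111 flag that keeps the model in float32) is a common workaround for half-precision mismatches, though whether it cures this particular ZLUDA code path is an assumption.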
Why do you have --lowram in there? How much RAM do you have and how much VRAM? What's your GPU?
Checklist
What happened?
When trying to run any model other than Stable Diffusion 1.5 (e.g. Pony Diffusion V6XL), I get a safetensors error.
Steps to reproduce the problem
What should have happened?
WebUI should load the model
What browsers do you use to access the UI ?
Other
Sysinfo
sysinfo-2024-11-03-14-59.json
Console logs
Additional information
I have 16 GB of RAM and an RX 5700 with 8 GB of VRAM; I'm also using DirectML.