AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Issue after installing new version of webui #14859

Open Infamousfish opened 5 months ago

Infamousfish commented 5 months ago

What happened?

After installing the webui I get an error that doesn't let me use it. I had the old webui, which was working fine, but I decided to do a fresh install and now it's not working anymore. (Two screenshots of the Stable Diffusion UI, taken 2024-02-07, were attached.)

Steps to reproduce the problem

  1. Went to the AUTOMATIC1111 GitHub page
  2. Went to "Installation and Running" and clicked the AMD instructions
  3. Copied and pasted the git clone command into cmd
  4. Added --skip-torch-cuda-test to webui-user.bat to get it to launch
  5. Got to the webui and got the error
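For context, step 4 typically means editing the COMMANDLINE_ARGS line in webui-user.bat; a minimal sketch of that file (only --skip-torch-cuda-test comes from this report, the rest is the stock template):

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test

call webui.bat
```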

What should have happened?

It should launch and work like before.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

Internal Server Error

Console logs

venv "C:\G\sd\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.7.0
Commit hash: adaea46e1c19d9a7091f89b0a7c6e66dfa732528
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --medvram --disable-nan-check --autolaunch --skip-torch-cuda-test
Style database not found: C:\G\sd\stable-diffusion-webui-directml\styles.csv
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "C:\G\sd\stable-diffusion-webui-directml\modules\shared_items.py", line 143, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 537, in get_sd_model
    load_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 608, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 230, in select_checkpoint
    raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
 - file C:\G\sd\stable-diffusion-webui-directml\model.ckpt
 - directory C:\G\sd\stable-diffusion-webui-directml\models\Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.

Stable diffusion model failed to load
Applying attention optimization: sub-quadratic... done.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 2.9s (prepare environment: 6.3s, initialize shared: 1.3s, load scripts: 0.6s, create ui: 0.5s, gradio launch: 0.1s).
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 152, in jsonable_encoder
    data = dict(obj)
TypeError: '_abc._abc_data' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 157, in jsonable_encoder
    data = vars(obj)
TypeError: vars() argument must have __dict__ attribute

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 255, in app
    content = await serialize_response(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 152, in serialize_response
    return jsonable_encoder(response_content)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 131, in jsonable_encoder
    jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 160, in jsonable_encoder
    raise ValueError(errors) from e
ValueError: [TypeError("'_abc._abc_data' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 224, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\shared_items.py", line 143, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 537, in get_sd_model
    load_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 608, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 230, in select_checkpoint
    raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
 - file C:\G\sd\stable-diffusion-webui-directml\model.ckpt
 - directory C:\G\sd\stable-diffusion-webui-directml\models\Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.

Stable diffusion model failed to load
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 224, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\shared_items.py", line 143, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 537, in get_sd_model
    load_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 608, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 230, in select_checkpoint
    raise FileNotFoundError(error_message)
Traceback (most recent call last):
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
 - file C:\G\sd\stable-diffusion-webui-directml\model.ckpt
 - directory C:\G\sd\stable-diffusion-webui-directml\models\Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(

  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(

  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
Stable diffusion model failed to load
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
loading stable diffusion model: FileNotFoundError
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 224, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Traceback (most recent call last):
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Admin\miniconda3\envs\Automatic1111_olive\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
Traceback (most recent call last):
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui.py", line 1816, in <lambda>
    visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit"
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\G\sd\stable-diffusion-webui-directml\modules\shared_items.py", line 143, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 537, in get_sd_model
    load_model()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 608, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sd_models.py", line 230, in select_checkpoint
    raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
 - file C:\G\sd\stable-diffusion-webui-directml\model.ckpt
 - directory C:\G\sd\stable-diffusion-webui-directml\models\Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)

  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)

  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
Stable diffusion model failed to load
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 224, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\ui.py", line 1892, in download_sysinfo
    text = sysinfo.get()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sysinfo.py", line 49, in get
    res = get_dict()
  File "C:\G\sd\stable-diffusion-webui-directml\modules\sysinfo.py", line 75, in get_dict
    gpu = DeviceProperties(devices.device)
  File "C:\G\sd\stable-diffusion-webui-directml\modules\dml\device_properties.py", line 13, in __init__
    self.name = torch.dml.get_device_name(device)
  File "C:\G\sd\stable-diffusion-webui-directml\venv\lib\site-packages\torch\__init__.py", line 1932, in __getattr__
    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
AttributeError: module 'torch' has no attribute 'dml'

Additional information

win11 6700xt
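For anyone landing here with the same log: the core failure is the FileNotFoundError above; the webui refuses to load until a .ckpt or .safetensors checkpoint exists in one of the listed locations. A minimal sketch of that search logic, for illustration only (the paths and helper name are assumptions, not actual webui code):

```python
from pathlib import Path


def find_checkpoints(webui_dir: str) -> list[Path]:
    """Mimic the webui's checkpoint search: a model.ckpt in the root
    plus any .ckpt/.safetensors under models/Stable-diffusion."""
    root = Path(webui_dir)
    found = []
    single = root / "model.ckpt"
    if single.is_file():
        found.append(single)
    model_dir = root / "models" / "Stable-diffusion"
    if model_dir.is_dir():
        for pattern in ("*.ckpt", "*.safetensors"):
            found.extend(sorted(model_dir.rglob(pattern)))
    if not found:
        raise FileNotFoundError(
            f"No checkpoints found. Looked at {single} and {model_dir}."
        )
    return found
```

Dropping any SD 1.5-compatible .safetensors file into models/Stable-diffusion makes the search succeed and lets the UI start normally.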

Dave-Swagten commented 5 months ago

I have the same problem on a new installation running Win11 with a 7900 XTX.

k0zer0g commented 5 months ago

Same problem after installing the new version. Win11, RX 580.

dsthayl commented 5 months ago

Same error. Fresh install on an RX 590; it worked perfectly some months ago, a 512x512 took only 15 or so seconds. Now it can't even load settings or any model at all.

Infamousfish commented 5 months ago

I found a "solution" https://youtu.be/O40W-VOx5q0

roxas1212 commented 5 months ago

I found a "solution" https://youtu.be/O40W-VOx5q0

How do I use this for Google Colab?

Infamousfish commented 5 months ago

Did you try updating Python to version 3.10.9 instead of 3.10.6? Second, did you try deleting the virtual environment at the root of your A1111 folder? Please try this: delete the "venv" folder and then click on webui to start A1111 as usual.

This will reinstall the venv folder (no worries).

I appreciate your comment, but I got it to work eventually without doing any of what you said. However, I couldn't use hires fix, so I'm sticking with 1.6.1 for now until there's a new update.
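The venv reset suggested above boils down to removing one folder and relaunching; a sketch with a hypothetical install path (on Windows, delete the folder in Explorer or with `rmdir /s /q venv` instead):

```shell
# Hypothetical install location; adjust WEBUI_DIR to your own setup.
WEBUI_DIR="$HOME/stable-diffusion-webui"
mkdir -p "$WEBUI_DIR/venv"   # stand-in for an existing, broken venv
rm -rf "$WEBUI_DIR/venv"     # the launcher recreates it on the next start
# Then start the UI again: webui-user.bat on Windows, ./webui.sh on Linux.
```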

roxas1212 commented 5 months ago

How do I downgrade to version 1.6.1?

Infamousfish commented 5 months ago

I linked a video above.

roxas1212 commented 5 months ago

I use this: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb

When I run "git checkout 03eec1791be011e087985ae93c1f66315d5a250e", it fails with "fatal: reference is not a tree: 03eec1791be011e087985ae93c1f66315d5a250e".
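For reference, "fatal: reference is not a tree" usually means the commit hash simply does not exist in the repository you cloned (for example, a hash copied from a different fork). Checking out a release by tag avoids guessing hashes. A runnable toy demo of that pattern (the repository here is a throwaway created in a temp directory; the tag name mirrors the webui's release tags):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# Build a tiny history with two "releases".
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "release 1.6.1"
git tag v1.6.1
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "release 1.7.0"
# In a real clone, make sure tags are present first: git fetch --tags
git checkout -q v1.6.1
git describe --tags   # prints v1.6.1
```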

roxas1212 commented 5 months ago

Do you mean to modify the code in it? Can you help me modify the ipynb file above? I'm not familiar with Colab and programming. I'm really clueless, thanks.

Infamousfish commented 5 months ago

No, no, here is the link to the correct repo: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/5ef669de080814067961f28357256e8fe27544f4. No link available for 1.6.1; I didn't find it (no time, I'm in a hurry right now). Just DOWNLOAD it, do not GIT CLONE it :) because if you git clone it, it will install the latest version (1.7.0). I tried 1.7.0 and had no problem using hires fix.

Do you use AMD or Nvidia?

roxas1212 commented 5 months ago

Thank you for helping clueless me; I have successfully downgraded to 1.6.1. I haven't solved the problem that occurred in the older version, but I've been able to use it temporarily: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14854 If anyone can help me solve it, please let me know, because I really like working on this version. Thanks.

roxas1212 commented 5 months ago

It can generate the links normally:

Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Commit hash: e5be9d1a5cee6e189ae7b86e9b968c1bf74ea960
Installing requirements
Launching Web UI with arguments: --theme dark --share --gradio-debug --disable-safe-unpickle --no-half-vae --reinstall-xformers --enable-insecure-extension-access --opt-channelslast
2024-02-10 09:30:47.844581: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-10 09:30:47.844635: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-10 09:30:47.846096: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-10 09:30:47.854012: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-10 09:30:49.128411: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cu117.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Checkpoint darkSushiMixMix_darkerPruned.safetensors [fb44463063] not found; loading fallback v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights [6ce0161689] from /content/test1/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /content/test1/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Couldn't find VAE named vae-ft-mse-840000-ema-pruned.ckpt; using None instead
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(2): badhandv4, EasyNegative
Model loaded in 3.8s (create model: 0.6s, apply weights to model: 0.7s, apply channels_last: 0.7s, apply half(): 0.4s, load VAE: 0.6s, move model to device: 0.6s).
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://b8399fba8ab3ba2f61.gradio.live

But when I enter the link, it says ERROR and doesn't work. (Screenshot of the error was attached.)

roxas1212 commented 5 months ago

@roxas1212

It seems you are running a very outdated version of SD; extensions have been updated a lot since, which can create issues with this version. You are using Python 3.10.12, but it's better to use 3.10.9. It also seems to run on your CPU (extra slow), even though you have CUDA installed (so your system can handle CUDA; you have an Nvidia card, right?), because I see that your TensorFlow is optimized for CPU when it needs to be optimized for GPU (which can handle torch 2.0, by the way).

I see you run an old version of torch (1.13.1 with CUDA 11.7); there have been a lot of updates since, so torch must be updated. Try the command-line argument "--reinstall-torch".

As for your models, it seems you installed all of them on another hard drive, right? But in the configuration tab of your SD, their paths are not correct.

Try these arguments as a command in your webui.bat:

--ckpt-dir "F:\SDXL Models" --lora-dir "J:\Fooocus_win64_2-1-25\Fooocus\models\lora XL"

(These are my personal paths; replace them with your own.)

Your output directories are wrong too; ONNX cannot span multiple drives like C:\, E:\, etc. Just use one disk drive, like C:. For the others, use the defaults, or simply create links in the "image folder" (on Windows), create a folder for each thing, and then put the correct path into the UI.

And voilà :)

I can't believe my old version is working again. Thank you so much.
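For reference, the directory overrides suggested above typically go on the COMMANDLINE_ARGS line of webui-user.bat rather than being typed as a standalone command; a sketch with placeholder paths (adjust the drive letters and folders to your own layout):

```shell
@echo off
set COMMANDLINE_ARGS=--ckpt-dir "D:\models\checkpoints" --lora-dir "D:\models\lora"
call webui.bat
```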

mikedebian commented 1 month ago

I have the same issue on Arch Linux. The WebUI loads but nothing works, and the page looks like the one in the original post. It errors out with the same error messages on the latest release. Even the settings page is weird and doesn't load properly. This is with a fresh install: I can't select any checkboxes and it all looks "off" (no, I'm not suffering from the Chinese CSS bug).

After checking out the commit that Infamousfish posted, it worked again; however, this is quite an old version. I can't find any reference to this bug being worked on.