lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: --use-cpu all and --use-cpu sd no longer work; webui doesn't start #242

Closed StudioDUzes closed 4 months ago

StudioDUzes commented 11 months ago

Is there an existing issue for this?

What happened?

webui doesn't start with --no-half --use-cpu all

Steps to reproduce the problem

  1. Launch with --no-half --use-cpu all
  2. webui doesn't start

What should have happened?

webui should start

Version or Commit where the problem happens

9a8a2a47f63c3d9b04c014a715f95d680f461963

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Intel ARC GPUs

Cross attention optimization

sdp

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --no-half --use-cpu all

git pull

call webui.bat

List of extensions

None

Console logs

Already up to date.
venv "L:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 9a8a2a47f63c3d9b04c014a715f95d680f461963
Launching Web UI with arguments: --no-half --use-cpu all
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ L:\stable-diffusion-webui-directml\launch.py:39 in <module>                                      │
│                                                                                                  │
│   36                                                                                             │
│   37                                                                                             │
│   38 if __name__ == "__main__":                                                                  │
│ ❱ 39 │   main()                                                                                  │
│   40                                                                                             │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\launch.py:35 in main                                          │
│                                                                                                  │
│   32 │   if args.test_server:                                                                    │
│   33 │   │   configure_for_tests()                                                               │
│   34 │                                                                                           │
│ ❱ 35 │   start()                                                                                 │
│   36                                                                                             │
│   37                                                                                             │
│   38 if __name__ == "__main__":                                                                  │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\modules\launch_utils.py:443 in start                          │
│                                                                                                  │
│   440                                                                                            │
│   441 def start():                                                                               │
│   442 │   print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with argum   │
│ ❱ 443 │   import webui                                                                           │
│   444 │   if '--nowebui' in sys.argv:                                                            │
│   445 │   │   webui.api_only()                                                                   │
│   446 │   else:                                                                                  │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\webui.py:54 in <module>                                       │
│                                                                                                  │
│    51 startup_timer.record("import ldm")                                                         │
│    52                                                                                            │
│    53 from modules import extra_networks                                                         │
│ ❱  54 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, queue_lock  # noq   │
│    55                                                                                            │
│    56 # Truncate version number of nightly/local build of PyTorch to not cause exceptions with   │
│    57 if ".dev" in torch.__version__ or "+git" in torch.__version__:                             │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\modules\call_queue.py:6 in <module>                           │
│                                                                                                  │
│     3 import threading                                                                           │
│     4 import time                                                                                │
│     5                                                                                            │
│ ❱   6 from modules import shared, progress, errors                                               │
│     7                                                                                            │
│     8 queue_lock = threading.Lock()                                                              │
│     9                                                                                            │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\modules\shared.py:756 in <module>                             │
│                                                                                                  │
│   753 │   opts.load(config_filename)                                                             │
│   754                                                                                            │
│   755 if cmd_opts.backend == 'directml':                                                         │
│ ❱ 756 │   directml_do_hijack()                                                                   │
│   757                                                                                            │
│   758                                                                                            │
│   759 class Shared(sys.modules[__name__].__class__):                                             │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\modules\dml\__init__.py:64 in directml_do_hijack              │
│                                                                                                  │
│   61 │   import modules.dml.hijack                                                               │
│   62 │   from modules.devices import device                                                      │
│   63 │                                                                                           │
│ ❱ 64 │   if not torch.dml.has_float64_support(device):                                           │
│   65 │   │   CondFunc('torch.from_numpy',                                                        │
│   66 │   │   │   lambda orig_func, *args, **kwargs: orig_func(args[0].astype('float32')),        │
│   67 │   │   │   lambda *args, **kwargs: args[1].dtype == float)                                 │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\modules\dml\backend.py:40 in has_float64_support              │
│                                                                                                  │
│   37 │   │   return device.type == "privateuseone"                                               │
│   38 │                                                                                           │
│   39 │   def has_float64_support(device: Optional[rDevice]=None) -> bool:                        │
│ ❱ 40 │   │   return torch_directml.has_float64_support(get_device(device).index)                 │
│   41 │                                                                                           │
│   42 │   def device_count() -> int:                                                              │
│   43 │   │   return torch_directml.device_count()                                                │
│                                                                                                  │
│ L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch_directml\device.py:44 in         │
│ has_float64_support                                                                              │
│                                                                                                  │
│   41 │   torch_directml_native.disable_tiled_resources(is_disabled)                              │
│   42                                                                                             │
│   43 def has_float64_support(device_id = default_device()):                                      │
│ ❱ 44 │   return torch_directml_native.has_float64_support(device_id)                             │
│   45                                                                                             │
│   46 def gpu_memory(device_id = default_device(), mb_per_tile = 1):                              │
│   47 │   return torch_directml_native.get_gpu_memory(device_id, mb_per_tile)                     │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: has_float64_support(): incompatible function arguments. The following argument types are supported:
    1. (arg0: int) -> bool

Invoked with: None
Appuyez sur une touche pour continuer... [Press any key to continue...]
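The TypeError at the bottom of the log indicates that with --use-cpu all, `modules/devices.device` is the CPU device, so `get_device(device).index` is `None`, which `torch_directml_native.has_float64_support` rejects because it only accepts an `int`. A minimal sketch of the failure mode and one possible guard follows; `FakeDevice` and the wrapper functions are hypothetical stand-ins for illustration, not the actual torch_directml API:

```python
class FakeDevice:
    # Hypothetical stand-in for torch.device; real CPU devices also
    # have index=None, which is what triggers the crash.
    def __init__(self, type_, index=None):
        self.type = type_
        self.index = index

def native_has_float64_support(device_id):
    # Mimics torch_directml_native.has_float64_support, which per the
    # TypeError in the log only accepts an int argument.
    if not isinstance(device_id, int):
        raise TypeError("has_float64_support(): incompatible function arguments")
    return True

def has_float64_support_guarded(device):
    # Possible guard: only query the native DirectML backend for
    # DirectML ("privateuseone") devices, falling back to index 0 when
    # no index is set; CPU supports float64 natively.
    if device.type != "privateuseone":
        return True
    return native_has_float64_support(device.index if device.index is not None else 0)

cpu = FakeDevice("cpu")                    # index=None: crashed the original code
dml = FakeDevice("privateuseone", 0)       # DirectML device with an explicit index
print(has_float64_support_guarded(cpu))    # no TypeError with the guard in place
print(has_float64_support_guarded(dml))
```

With this kind of check in `directml_do_hijack` (or in the `has_float64_support` wrapper), running with --use-cpu all would skip the DirectML-specific query instead of passing `None` into the native binding.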

Additional information

No response