lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: ADL2: Failed to get MemoryInfo2 #221

Closed: Jekc1001 closed this issue 7 months ago

Jekc1001 commented 1 year ago

Is there an existing issue for this?

What happened?

After updating via git pull, the web UI fails to launch and throws RuntimeError: ADL2: Failed to get MemoryInfo2 (full traceback under Console logs below).


Steps to reproduce the problem

Run webui-user.bat with the contents shown under Command Line Arguments below: it runs git pull to update and then calls webui.bat. The error is raised while the web UI starts up.

What should have happened?

The web UI should have launched normally.

Version or Commit where the problem happens

1.5.1

What Python version are you running on ?

None selected in the template (the console log shows Python 3.10.6).

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Other GPUs

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check 

call git pull 
call webui.bat

List of extensions

civitai-shortcut, sd-dynamic-thresholding, Stable-Diffusion-Webui-Civitai-Helper, stable-diffusion-webui-images-browser

Console logs

remote: Enumerating objects: 27, done.
remote: Counting objects: 100% (27/27), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 27 (delta 7), reused 20 (delta 5), pack-reused 0
Unpacking objects: 100% (27/27), 41.91 KiB | 26.00 KiB/s, done.
From https://github.com/lshqqytiger/stable-diffusion-webui-directml
   4873e6aa..69c8faea  master     -> origin/master
Updating 4873e6aa..69c8faea
Fast-forward
 modules/dml/__init__.py                            | 40 ++++++++--
 modules/dml/backend.py                             | 46 ++++--------
 modules/dml/memctl/amd/__init__.py                 |  8 --
 modules/dml/memctl/intel/__init__.py               |  6 --
 modules/dml/memctl/memctl.py                       |  8 --
 modules/dml/memctl/nvidia/__init__.py              |  6 --
 modules/dml/memctl/unknown/__init__.py             |  5 --
 modules/dml/memory.py                              | 31 ++++++++
 modules/dml/memory_amd/__init__.py                 |  7 ++
 .../{memctl/amd => memory_amd}/driver/atiadlxx.py  |  0
 .../amd => memory_amd}/driver/atiadlxx_apis.py     |  0
 .../amd => memory_amd}/driver/atiadlxx_defines.py  |  0
 .../driver/atiadlxx_structures.py                  |  0
 modules/dml/pdh/__init__.py                        | 85 ++++++++++++++++++++++
 modules/dml/pdh/apis.py                            | 36 +++++++++
 modules/dml/pdh/defines.py                         | 22 ++++++
 modules/dml/pdh/errors.py                          |  3 +
 modules/dml/pdh/msvcrt.py                          | 11 +++
 modules/dml/pdh/structures.py                      | 41 +++++++++++
 modules/memmon.py                                  | 17 +++--
 modules/shared.py                                  | 10 +--
 modules/ui.py                                      |  2 +
 22 files changed, 305 insertions(+), 79 deletions(-)
 delete mode 100644 modules/dml/memctl/amd/__init__.py
 delete mode 100644 modules/dml/memctl/intel/__init__.py
 delete mode 100644 modules/dml/memctl/memctl.py
 delete mode 100644 modules/dml/memctl/nvidia/__init__.py
 delete mode 100644 modules/dml/memctl/unknown/__init__.py
 create mode 100644 modules/dml/memory.py
 create mode 100644 modules/dml/memory_amd/__init__.py
 rename modules/dml/{memctl/amd => memory_amd}/driver/atiadlxx.py (100%)
 rename modules/dml/{memctl/amd => memory_amd}/driver/atiadlxx_apis.py (100%)
 rename modules/dml/{memctl/amd => memory_amd}/driver/atiadlxx_defines.py (100%)
 rename modules/dml/{memctl/amd => memory_amd}/driver/atiadlxx_structures.py (100%)
 create mode 100644 modules/dml/pdh/__init__.py
 create mode 100644 modules/dml/pdh/apis.py
 create mode 100644 modules/dml/pdh/defines.py
 create mode 100644 modules/dml/pdh/errors.py
 create mode 100644 modules/dml/pdh/msvcrt.py
 create mode 100644 modules/dml/pdh/structures.py
venv "D:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 69c8faeacb758cb825c4a4499dc50fa51a9a4cf3
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\stable-diffusion-webui-directml\launch.py:39 in <module>                                      │
│                                                                                                  │
│   36                                                                                             │
│   37                                                                                             │
│   38 if __name__ == "__main__":                                                                  │
│ ❱ 39 │   main()                                                                                  │
│   40                                                                                             │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\launch.py:35 in main                                          │
│                                                                                                  │
│   32 │   if args.test_server:                                                                    │
│   33 │   │   configure_for_tests()                                                               │
│   34 │                                                                                           │
│ ❱ 35 │   start()                                                                                 │
│   36                                                                                             │
│   37                                                                                             │
│   38 if __name__ == "__main__":                                                                  │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\launch_utils.py:443 in start                          │
│                                                                                                  │
│   440                                                                                            │
│   441 def start():                                                                               │
│   442 │   print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with argum   │
│ ❱ 443 │   import webui                                                                           │
│   444 │   if '--nowebui' in sys.argv:                                                            │
│   445 │   │   webui.api_only()                                                                   │
│   446 │   else:                                                                                  │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\webui.py:54 in <module>                                       │
│                                                                                                  │
│    51 startup_timer.record("import ldm")                                                         │
│    52                                                                                            │
│    53 from modules import extra_networks                                                         │
│ ❱  54 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, queue_lock  # noq   │
│    55                                                                                            │
│    56 # Truncate version number of nightly/local build of PyTorch to not cause exceptions with   │
│    57 if ".dev" in torch.__version__ or "+git" in torch.__version__:                             │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\call_queue.py:6 in <module>                           │
│                                                                                                  │
│     3 import threading                                                                           │
│     4 import time                                                                                │
│     5                                                                                            │
│ ❱   6 from modules import shared, progress, errors                                               │
│     7                                                                                            │
│     8 queue_lock = threading.Lock()                                                              │
│     9                                                                                            │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\shared.py:72 in <module>                              │
│                                                                                                  │
│    69 if cmd_opts.olive:                                                                         │
│    70 │   cmd_opts.onnx = True                                                                   │
│    71 if cmd_opts.backend == "directml":                                                         │
│ ❱  72 │   directml_init()                                                                        │
│    73                                                                                            │
│    74                                                                                            │
│    75 devices.device, devices.device_interrogate, devices.device_gfpgan, devices.device_esrgan   │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\__init__.py:38 in directml_init                   │
│                                                                                                  │
│   35 │   torch.cuda.mem_get_info = torch.dml.mem_get_info                                        │
│   36                                                                                             │
│   37 def directml_init():                                                                        │
│ ❱ 38 │   from modules.dml.backend import DirectML # pylint: disable=ungrouped-imports            │
│   39 │   # Alternative of torch.cuda for DirectML.                                               │
│   40 │   torch.dml = DirectML                                                                    │
│   41                                                                                             │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\backend.py:10 in <module>                         │
│                                                                                                  │
│    7 from .utils import rDevice, get_device                                                      │
│    8 from .device import device                                                                  │
│    9 from .device_properties import DeviceProperties                                             │
│ ❱ 10 from .memory_amd import AMDMemoryProvider                                                   │
│   11 from .memory import MemoryProvider                                                          │
│   12                                                                                             │
│   13 def amd_mem_get_info(device: Optional[rDevice]=None) -> tuple[int, int]:                    │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\memory_amd\__init__.py:3 in <module>              │
│                                                                                                  │
│   1 from .driver.atiadlxx import ATIADLxx                                                        │
│   2                                                                                              │
│ ❱ 3 class AMDMemoryProvider:                                                                     │
│   4 │   driver: ATIADLxx = ATIADLxx()                                                            │
│   5 │   def mem_get_info(index):                                                                 │
│   6 │   │   usage = AMDMemoryProvider.driver.get_dedicated_vram_usage(index) * (1 << 20)         │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\memory_amd\__init__.py:4 in AMDMemoryProvider     │
│                                                                                                  │
│   1 from .driver.atiadlxx import ATIADLxx                                                        │
│   2                                                                                              │
│   3 class AMDMemoryProvider:                                                                     │
│ ❱ 4 │   driver: ATIADLxx = ATIADLxx()                                                            │
│   5 │   def mem_get_info(index):                                                                 │
│   6 │   │   usage = AMDMemoryProvider.driver.get_dedicated_vram_usage(index) * (1 << 20)         │
│   7 │   │   return (AMDMemoryProvider.driver.iHyperMemorySize - usage, AMDMemoryProvider.dri     │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\memory_amd\driver\atiadlxx.py:22 in __init__      │
│                                                                                                  │
│   19 │   │   │   if adapter.iBusNumber not in busNumbers: # filter duplicate device              │
│   20 │   │   │   │   self.devices.append(adapter)                                                │
│   21 │   │   │   │   busNumbers.append(adapter.iBusNumber)                                       │
│ ❱ 22 │   │   self.iHyperMemorySize = self.get_memory_info2(0).iHyperMemorySize                   │
│   23 │                                                                                           │
│   24 │   def get_memory_info2(self, adapterIndex: int) -> ADLMemoryInfo2:                        │
│   25 │   │   info = ADLMemoryInfo2()                                                             │
│                                                                                                  │
│ D:\stable-diffusion-webui-directml\modules\dml\memory_amd\driver\atiadlxx.py:28 in               │
│ get_memory_info2                                                                                 │
│                                                                                                  │
│   25 │   │   info = ADLMemoryInfo2()                                                             │
│   26 │   │                                                                                       │
│   27 │   │   if ADL2_Adapter_MemoryInfo2_Get(self.context, adapterIndex, C.byref(info)) != AD    │
│ ❱ 28 │   │   │   raise RuntimeError("ADL2: Failed to get MemoryInfo2")                           │
│   29 │   │                                                                                       │
│   30 │   │   return info                                                                         │
│   31                                                                                             │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: ADL2: Failed to get MemoryInfo2
Press any key to continue . . .

Additional information

GPU: AMD Radeon RX 570 (4 GB VRAM)

lshqqytiger commented 1 year ago

#216

Change the DirectML memory stats provider to Performance Counter or None in Settings -> Optimization. If the web UI does not launch at all, edit config.json directly, as described in https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/219#issuecomment-1660672745
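
A minimal sketch of that manual config.json edit, written in Python, assuming the setting is stored under a key named directml_memory_provider (the key name and accepted values are assumptions here; confirm them against the linked comment or your own config.json):

import json
from pathlib import Path

# Path taken from this report; adjust to your install location.
config_path = Path(r"D:\stable-diffusion-webui-directml\config.json")

# Load the existing config, or start from an empty dict if the file is missing.
config = json.loads(config_path.read_text(encoding="utf-8")) if config_path.exists() else {}

# Assumed key for the DirectML memory stats provider; "None" would disable it entirely.
config["directml_memory_provider"] = "Performance Counter"

config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")
print("Memory stats provider updated; relaunch webui-user.bat")

Make the edit while the web UI is closed so the file is not overwritten when settings are saved.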