lllyasviel / stable-diffusion-webui-forge


[Bug]: Directml not working #570

Open ananosleep opened 4 months ago

ananosleep commented 4 months ago

What happened?

txt2img failed with RuntimeError: Cannot set version_counter for inference tensor, followed by TypeError: 'NoneType' object is not iterable.
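
For context, the RuntimeError looks reproducible outside of Forge: it is PyTorch's generic restriction on "inference tensors". Below is a minimal sketch of the same failure, assuming Forge reuses or copies a tensor that was created under torch.inference_mode() (my guess, not confirmed against the Forge code):

import copy
import torch

# Tensors created under inference_mode() are marked as "inference tensors"
with torch.inference_mode():
    t = torch.ones(3)

# Deep-copying (or otherwise mutating) such a tensor outside inference mode fails
copy.deepcopy(t)  # RuntimeError: Cannot set version_counter for inference tensor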

Steps to reproduce the problem

  1. Launch the webui with the arguments --directml --skip-torch-cuda-test --all-in-fp16 (the same error occurs without --all-in-fp16); a sample launcher config is sketched after this list
  2. Enter a prompt and click Generate
  3. Generation fails with the errors above
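
A minimal webui-user.bat matching step 1, assuming the stock Windows launcher that ships with Forge (only COMMANDLINE_ARGS differs from the default):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--directml --skip-torch-cuda-test --all-in-fp16

call webui.bat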

What should have happened?

The image should have been generated normally, as it is under stable-diffusion-webui-directml.

What browsers do you use to access the UI?

No response

Sysinfo

sysinfo-2024-03-17-12-25.json

Console logs

venv "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --directml --skip-torch-cuda-test --all-in-fp16
Using directml with device:
Total VRAM 1024 MB, total RAM 16231 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Forcing FP16.
Set vram state to: LOW_VRAM
Device: privateuseone
VAE dtype: torch.float32
CUDA Stream Activated:  False
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
ControlNet preprocessor location: C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [a2c153a866] from C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\models\Stable-diffusion\Für-Alice.safetensors
2024-03-17 20:14:53,229 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.3s (prepare environment: 1.5s, import torch: 5.7s, import gradio: 1.2s, setup paths: 1.1s, initialize shared: 0.2s, other imports: 0.7s, load scripts: 3.7s, create ui: 0.7s, gradio launch: 0.5s).
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Model loaded in 5.2s (load weights from disk: 0.4s, forge load real models: 3.7s, calculate empty prompt: 1.0s).
Traceback (most recent call last):
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\processing.py", line 875, in process_images_inner
    p.setup_conds()
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\processing.py", line 1452, in setup_conds
    super().setup_conds()
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\processing.py", line 510, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\processing.py", line 496, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\sd_hijack_clip.py", line 276, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules_forge\forge_clip.py", line 20, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 224, in forward
    position_ids = self.position_ids[:, :seq_length]
RuntimeError: Cannot set version_counter for inference tensor
Cannot set version_counter for inference tensor
*** Error completing request
*** Arguments: ('task(4e0fy3komoqel99)', <gradio.routes.Request object at 0x000001AA32EABF70>, '', '', ['团子'], 20, 'Euler a', 1, 1, 5, 768, 432, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\pc\Desktop\MyFile\AI\webui\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---
[A second generation attempt, task(fboldektc9pzn67), failed with an identical traceback: the same RuntimeError: Cannot set version_counter for inference tensor followed by the same TypeError: 'NoneType' object is not iterable.]

---
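
The trailing TypeError appears to be a symptom rather than a second bug: modules/call_queue.py line 57 (visible in the traceback above) runs res = list(func(*args, **kwargs)), and once the task has already died, func returns None, which cannot be iterated. A minimal illustration:

def failed_task():
    # stand-in for a generation task that aborted before producing results
    return None

res = list(failed_task())  # TypeError: 'NoneType' object is not iterable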

Additional information

No response

VeteranXT commented 3 months ago

Duplicate of #58

ArtisticMusician commented 3 months ago

I believe I know why: I used the command-line flag --directml and got this error during install:

    import ldm_patched.modules.model_management as model_management
  File "\webui\ldm_patched\modules\model_management.py", line 38, in <module>
    import torch_directml
ModuleNotFoundError: No module named 'torch_directml'
Press any key to continue . . .
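
If the module is missing from the venv, it can usually be installed with pip install torch-directml (the PyPI package name uses a hyphen, the import an underscore). A quick sanity check, assuming the standard torch-directml API:

import torch
import torch_directml          # raises ModuleNotFoundError if the wheel is not installed

dml = torch_directml.device()  # default DirectML device
x = torch.ones(2, 2, device=dml)
print(x.device)                # privateuseone:0, matching "Device: privateuseone" in the log above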