lllyasviel / stable-diffusion-webui-forge


TypeError: 'NoneType' object is not iterable, CUDA error: invalid argument #594

Open · Gamination opened this issue 3 months ago

Gamination commented 3 months ago

What happened?

Cannot generate any images because the model cannot be loaded.

Steps to reproduce the problem

  1. Download the Feb 5 one-click installation package
  2. Run update.bat
  3. Run run.bat
  4. Select any model
  5. Try to generate an image

What should have happened?

The model should load and generate an image.

What browsers do you use to access the UI?

Google Chrome, Microsoft Edge

Sysinfo

sysinfo-2024-03-21-18-19.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments:
Total VRAM 6128 MB, total RAM 32122 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 5600M [ZLUDA] : native
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
Using pytorch cross attention
ControlNet preprocessor location: D:\AI\StableDiffusion\New folder (2)\webui\models\ControlNetPreprocessor
Loading weights [6ce0161689] from D:\AI\StableDiffusion\New folder (2)\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
2024-03-21 23:51:49,199 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 24.2s (prepare environment: 6.0s, import torch: 7.3s, import gradio: 2.0s, setup paths: 1.7s, initialize shared: 0.2s, other imports: 1.3s, load scripts: 3.9s, create ui: 1.1s, gradio launch: 0.5s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6001.9990234375
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  4523.791400909424
Moving model(s) has taken 0.21 seconds
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\New folder (2)\webui\launch.py", line 51, in <module>
    main()
  File "D:\AI\StableDiffusion\New folder (2)\webui\launch.py", line 47, in main
    start()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\launch_utils.py", line 549, in start
    main_thread.loop()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 614, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 539, in get_empty_cond
    return sd_model.cond_stage_model([""])
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack_clip.py", line 276, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\forge_clip.py", line 20, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack.py", line 177, in forward
    inputs_embeds = self.wrapped(input_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Stable diffusion model failed to load
Loading weights [6ce0161689] from D:\AI\StableDiffusion\New folder (2)\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  5686.29052734375
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  4208.082904815674
Moving model(s) has taken 0.16 seconds
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\processing.py", line 741, in process_images
    sd_models.reload_model_weights()
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 628, in reload_model_weights
    return load_model(info)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 614, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_models.py", line 539, in get_empty_cond
    return sd_model.cond_stage_model([""])
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack_clip.py", line 276, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules_forge\forge_clip.py", line 20, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\webui\modules\sd_hijack.py", line 177, in forward
    inputs_embeds = self.wrapped(input_ids)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "D:\AI\StableDiffusion\New folder (2)\system\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

*** Error completing request
*** Arguments: ('task(yha55n0i1e8svza)', <gradio.routes.Request object at 0x0000022E87717BE0>, 'this is a prompt haha', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\AI\StableDiffusion\New folder (2)\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

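(For anyone skimming the log: the closing `TypeError: 'NoneType' object is not iterable` is a downstream symptom, not the root cause. The CUDA `RuntimeError` aborts model loading, the txt2img task returns `None`, and `call_queue.py` then fails at `res = list(func(*args, **kwargs))` when it tries to iterate that `None`. A minimal sketch of the pattern, with illustrative names rather than Forge's actual code:)

```python
# Minimal sketch of the failure chain (illustrative names, not Forge's
# actual code): the CUDA RuntimeError is the root cause, and the
# TypeError is only how it surfaces in the UI.

def txt2img(*args, **kwargs):
    try:
        raise RuntimeError("CUDA error: invalid argument")  # model load fails
    except RuntimeError as e:
        print(f"Stable diffusion model failed to load\n{e}")
        return None  # task finishes without producing a result

def f(*args, **kwargs):
    # mirrors call_queue.py line 57: res = list(func(*args, **kwargs))
    res = list(txt2img(*args, **kwargs))
    return res

f("task(...)")  # TypeError: 'NoneType' object is not iterable
```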
---

Additional information

I updated my AMD GPU driver to 24.2.1 after finding that 24.1.1 was not working. https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/390 is a similar issue that was never resolved.
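One debugging note, since the traces suggest it themselves: CUDA reports kernel errors asynchronously, so the line shown in the stack trace may not be the real failure point. Setting `CUDA_LAUNCH_BLOCKING=1` makes errors surface at the offending call. A hedged sketch of one way to enable it (for the one-click package, `set CUDA_LAUNCH_BLOCKING=1` in the launch .bat before it starts Python should be equivalent):

```python
# Sketch, not Forge code: force synchronous CUDA kernel launches so the
# traceback points at the kernel that actually failed. The variable must
# be set before the first CUDA call; before importing torch is safest.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

if torch.cuda.is_available():
    x = torch.ones(8, device="cuda")   # any launch error now raises here,
    print(x.sum().item())              # at this line, not at a later call
```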

Bocchi-Chan2023 commented 3 months ago

https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/441#issuecomment-2008615931

Note that it still takes more than 10 minutes to generate the first image.

Gamination commented 3 months ago

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments:
Total VRAM 6128 MB, total RAM 32122 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 5600M [ZLUDA] : native
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
Using pytorch cross attention
ControlNet preprocessor location: D:\AI\StableDiffusion\New folder (2)\webui\models\ControlNetPreprocessor
Loading weights [6ce0161689] from D:\AI\StableDiffusion\New folder (2)\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
2024-03-23 20:24:09,377 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 28.8s (prepare environment: 5.3s, import torch: 10.7s, import gradio: 2.4s, setup paths: 2.9s, initialize shared: 0.3s, other imports: 1.4s, load scripts: 3.7s, create ui: 1.1s, gradio launch: 0.8s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6001.9990234375
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  4523.791400909424
Moving model(s) has taken 0.17 seconds

rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1010

rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .

Now it gives me this error instead.
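(The rocBLAS message is specific: the stock ROCm 5.7 rocblas build does not ship Tensile kernel libraries for gfx1010, the RX 5600M's RDNA1 architecture, so ZLUDA's BLAS backend cannot initialize. A small diagnostic sketch, assuming the path from the log above; it only checks, it does not fix anything:)

```python
# Diagnostic sketch: does the ROCm rocBLAS install contain Tensile
# kernel files for this GPU architecture? Path and arch are taken from
# the rocBLAS error in the log above.
from pathlib import Path

library = Path(r"C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library")
arch = "gfx1010"  # AMD Radeon RX 5600M (RDNA1 / Navi 10)

matches = sorted(p.name for p in library.glob(f"*{arch}*"))
print(f"{len(matches)} Tensile file(s) for {arch} under {library}")
for name in matches:
    print(" ", name)
# Zero matches is consistent with "Cannot read ... TensileLibrary.dat
# ... for GPU arch : gfx1010": official rocBLAS builds only bundle
# kernels for officially supported architectures.
```

If that comes up empty, ZLUDA will likely need rocBLAS/Tensile libraries built for gfx1010 from a community source before generation can work.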