lllyasviel / stable-diffusion-webui-forge


xformers is not working on flux model #1019

Open · rltgjqmcpgjadyd opened this issue 3 months ago

rltgjqmcpgjadyd commented 3 months ago

torch: 2.4.0+cu124

xformers: 0.0.27.post2

If xformers is enabled when generating images with Flux models, a TypeError: 'NoneType' object is not iterable error occurs and no image is generated.

With other SD models, images are generated normally.

For now, to generate an image with a Flux model you need to uninstall xformers.
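
(For anyone double-checking their setup, here is a minimal sketch, not part of Forge, that reports whether xformers is still installed in the venv used to launch the webui; it uses only the standard library.)

from importlib.metadata import version, PackageNotFoundError

# Run this with the same Python interpreter that launches webui-forge.
try:
    print(f"xformers {version('xformers')} is installed - expect the Flux error below")
except PackageNotFoundError:
    print("xformers is not installed - Flux generation should work")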

lintglitch commented 3 months ago

Thank you! I had been searching for some time to figure out why Flux did not work for me. Once I removed xformers, it worked fine.

My version: f2.0.1v1.10.1-previous-248-gf6ef105c  •  python: 3.10.6  •  torch: 2.3.1+cu121  •  xformers: 0.0.27.post2  •  gradio: 4.40.0  •  checkpoint: 275ef623d3

Here is my console log for the error:

To create a public link, set `share=True` in `launch()`.
Startup time: 26.7s (prepare environment: 18.5s, launcher: 1.4s, import torch: 2.9s, initialize shared: 0.3s, other imports: 0.6s, load scripts: 0.9s, create ui: 1.3s, gradio launch: 0.8s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Model selected: {'checkpoint_info': {'filename': 'H:\\SD\\stable-diffusion-webui-forge\\models\\Stable-diffusion\\flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
Loading Model: {'checkpoint_info': {'filename': 'H:\\SD\\stable-diffusion-webui-forge\\models\\Stable-diffusion\\flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'vae_filename': None, 'unet_storage_dtype': None}
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 2.1s (unload existing model: 0.1s, load state dict: 0.8s, forge model load: 1.1s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model ModuleDict
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 9060.60 MB
[Memory Management] Required Model Memory: 5154.62 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 2881.98 MB
Moving model(s) has taken 8.25 seconds
Traceback (most recent call last):
  File "H:\SD\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "H:\SD\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "H:\SD\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "H:\SD\stable-diffusion-webui-forge\modules\processing.py", line 799, in process_images
    res = process_images_inner(p)
  File "H:\SD\stable-diffusion-webui-forge\modules\processing.py", line 912, in process_images_inner
    p.setup_conds()
  File "H:\SD\stable-diffusion-webui-forge\modules\processing.py", line 1497, in setup_conds
    super().setup_conds()
  File "H:\SD\stable-diffusion-webui-forge\modules\processing.py", line 494, in setup_conds
    self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
  File "H:\SD\stable-diffusion-webui-forge\modules\processing.py", line 463, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "H:\SD\stable-diffusion-webui-forge\modules\prompt_parser.py", line 262, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
  File "H:\SD\stable-diffusion-webui-forge\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\diffusion_engine\flux.py", line 79, in get_learned_conditioning
    cond_t5 = self.text_processing_engine_t5(prompt)
  File "H:\SD\stable-diffusion-webui-forge\backend\text_processing\t5_engine.py", line 123, in __call__
    z = self.process_tokens([tokens], [multipliers])[0]
  File "H:\SD\stable-diffusion-webui-forge\backend\text_processing\t5_engine.py", line 134, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "H:\SD\stable-diffusion-webui-forge\backend\text_processing\t5_engine.py", line 60, in encode_with_transformers
    z = self.text_encoder(
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\nn\t5.py", line 205, in forward
    return self.encoder(x, *args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\nn\t5.py", line 186, in forward
    x, past_bias = l(x, mask, past_bias)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\nn\t5.py", line 162, in forward
    x, past_bias = self.layer[0](x, mask, past_bias)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\nn\t5.py", line 149, in forward
    output, past_bias = self.SelfAttention(self.layer_norm(x), mask=mask, past_bias=past_bias)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\SD\stable-diffusion-webui-forge\backend\nn\t5.py", line 138, in forward
    out = attention_function(q, k * ((k.shape[-1] / self.num_heads) ** 0.5), v, self.num_heads, mask)
  File "H:\SD\stable-diffusion-webui-forge\backend\attention.py", line 314, in attention_xformers
    mask_out[:, :, :mask.shape[-1]] = mask
RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0.  Target sizes: [1, 256, 256].  Tensor sizes: [64, 256, 256]
The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 0.  Target sizes: [1, 256, 256].  Tensor sizes: [64, 256, 256]
*** Error completing request
*** Arguments: ('task(ykskvjbavuwot6b)', <gradio.route_utils.Request object at 0x000001848B9C2080>, 'Astronaut in jungle', '', [], 1, 1, 1, 3.5, 1152, 896, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', None, 0, 20, 'Euler', 'Simple', False, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=None, batch_mask_gallery=None, generated_image=None, mask_image=None, mask_image_fg=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, image_fg=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 3, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', '', 0, '', '', 0, '', '', True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "H:\SD\stable-diffusion-webui-forge\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
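
For context on where this breaks: the failing line in backend/attention.py (attention_xformers) copies the T5 attention mask into a buffer allocated with a batch dimension of 1, while the mask reaching it already has 64 rows (presumably batch × num_heads), so the copy cannot broadcast. A minimal sketch that reproduces the same RuntimeError outside Forge, with the shapes taken from the error message above:

import torch

# Shapes from the log: the destination slice is [1, 256, 256] while the
# incoming mask is [64, 256, 256] (presumably batch * num_heads = 64).
mask = torch.zeros(64, 256, 256)      # mask as it reaches attention_xformers
mask_out = torch.empty(1, 256, 256)   # buffer allocated with batch dimension 1

mask_out[:, :, :mask.shape[-1]] = mask
# RuntimeError: The expanded size of the tensor (1) must match the existing
# size (64) at non-singleton dimension 0.

A proper fix would presumably allocate mask_out with the same leading dimension as the mask (or expand the mask before the copy), but that is a change for Forge's attention code; removing xformers sidesteps this code path entirely, which is why the workaround above works.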