AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Any idea how to fix broken LyCoris? ongoing problem - Apple Silicon #15929

Open optionalbased opened 6 months ago

optionalbased commented 6 months ago

Checklist

What happened?

After updating my A1111, LyCoris files still give me the error TypeError: BFloat16 is not supported on MPS. A year-old version of A1111 never had this problem with the same LyCoris files.

I tried everything: uninstalling and reinstalling, rolling back to an older version, deleting my checkpoints and LyCoris files and redownloading them, changing launch commands, adjusting every possible setting and config option, uninstalling and reinstalling extensions, updating dependencies, and so on.

A1111 has been almost unusable for me for a while now because I use a lot of LyCoris LoRAs. I spent about 14 hours in total trying to get it working again, with troubleshooting help from others.

M2 Max MacBook Pro with 96 GB of unified memory, running macOS Ventura.
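
For reference, here is a minimal sketch in plain PyTorch that should trigger the same error on a setup like this one (torch 2.1.x on macOS Ventura), assuming the LyCoris weights are stored as bfloat16; that assumption is mine, the logs don't confirm it:

```python
import torch

# Hypothetical minimal repro: moving a bfloat16 tensor to the MPS device
# on this torch/macOS combination raises the same TypeError seen in the webui.
w1 = torch.zeros(4, 4, dtype=torch.bfloat16)

if torch.backends.mps.is_available():
    try:
        w1.to(torch.device("mps"))
    except TypeError as e:
        print(e)  # "BFloat16 is not supported on MPS"
```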

Steps to reproduce the problem

1. Install A1111.
2. Load an SDXL checkpoint.
3. Add a LyCoris-based LoRA to the prompt.
4. Generate. The error appears.

What should have happened?

WebUI should function normally.

What browsers do you use to access the UI ?

No response

Sysinfo

sysinfo-2024-05-26-00-53.json

Console logs

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on arlologue user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.1.0.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
[-] ADetailer initialized. version: 24.5.1, num models: 10
Loading weights [821aa5537f] from /Users/arlologue/stable-diffusion-webui/models/Stable-diffusion/autismmixSDXL_autismmixPony.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: /Users/arlologue/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 7.8s (prepare environment: 0.1s, import torch: 2.6s, import gradio: 1.0s, setup paths: 1.1s, initialize shared: 0.1s, other imports: 1.3s, load scripts: 0.8s, create ui: 0.3s, gradio launch: 0.3s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 16.1s (load weights from disk: 0.5s, create model: 0.5s, apply weights to model: 14.3s, move model to device: 0.2s, calculate empty prompt: 0.4s).
Loading VAE weights specified in settings: /Users/arlologue/stable-diffusion-webui/models/VAE/sdxl_vae (2).safetensors
Applying attention optimization: sub-quadratic... done.
VAE weights loaded.
                                  Restoring base VAE
Applying attention optimization: sub-quadratic... done.
VAE weights loaded.
*** Error completing request
*** Arguments: ('task(ky271wjms4g02lg)', <gradio.routes.Request object at 0x2f73dd300>, ' <lora:diathorn_style_pony6_v1:1>, 1girl, clothed, score_9_up, score_8_up, score_7_up', ' score_6_up, score_5_up, score_4_up', [], 1, 1, 8.5, 1024, 1024, True, 0.07, 1.6, 'R-ESRGAN 4x+ Anime6B', 8, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 37, 'UniPC', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, 2, 'sdxl_vae (2).safetensors', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tap_enable': False, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 
'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/arlologue/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/arlologue/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/arlologue/stable-diffusion-webui/modules/processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "/Users/arlologue/stable-diffusion-webui/modules/processing.py", line 959, in process_images_inner
        p.setup_conds()
      File "/Users/arlologue/stable-diffusion-webui/modules/processing.py", line 1495, in setup_conds
        super().setup_conds()
      File "/Users/arlologue/stable-diffusion-webui/modules/processing.py", line 506, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "/Users/arlologue/stable-diffusion-webui/modules/processing.py", line 492, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "/Users/arlologue/stable-diffusion-webui/modules/prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "/Users/arlologue/stable-diffusion-webui/modules/sd_models_xl.py", line 32, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/modules/sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "/Users/arlologue/stable-diffusion-webui/extensions/sd-webui-prevent-artifact/scripts/pa.py", line 35, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "/Users/arlologue/stable-diffusion-webui/modules/sd_hijack_clip.py", line 354, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=self.wrapped.layer == "hidden")
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
        return self.text_model(
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 383, in forward
        hidden_states, attn_weights = self.self_attn(
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 272, in forward
        query_states = self.q_proj(hidden_states) * self.scale
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/modules/devices.py", line 164, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      File "/Users/arlologue/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 501, in network_Linear_forward
        network_apply_weights(self)
      File "/Users/arlologue/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 406, in network_apply_weights
        updown, ex_bias = module.calc_updown(weight)
      File "/Users/arlologue/stable-diffusion-webui/extensions-builtin/Lora/network_lokr.py", line 40, in calc_updown
        w1 = self.w1.to(orig_weight.device)
    TypeError: BFloat16 is not supported on MPS

---

Additional information

Nothing about my computer's software changed between the previous, working version and the broken, updated version.
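
The traceback ends in extensions-builtin/Lora/network_lokr.py, where the LoKr weight is moved to the model's device. A possible local workaround (a sketch only; the helper name is mine and I haven't confirmed it fixes generation end to end) would be to fall back to float32 whenever a bfloat16 tensor is about to be moved to MPS:

```python
import torch

def to_device_mps_safe(t: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Hypothetical helper: MPS has no bfloat16 support on this torch/macOS
    combination, so cast to float32 before moving the tensor there."""
    if device.type == "mps" and t.dtype == torch.bfloat16:
        t = t.to(torch.float32)
    return t.to(device)

# Sketch (not the shipped code): in network_lokr.py, calc_updown() could use
#   w1 = to_device_mps_safe(self.w1, orig_weight.device)
# instead of
#   w1 = self.w1.to(orig_weight.device)
```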

optionalbased commented 6 months ago

BTW, I tried this in both Chrome and Firefox.

AG-w commented 6 months ago

You didn't enable bfloat16, right?
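
If you're not sure, you can check whether the LyCoris file itself stores bfloat16 weights. Rough sketch below; the path is just an example, adjust it to your install:

```python
from safetensors import safe_open

# Example path (adjust to where your LyCoris file actually lives).
path = "models/Lora/diathorn_style_pony6_v1.safetensors"

# Collect the dtypes of all tensors stored in the file.
dtypes = set()
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        dtypes.add(str(f.get_tensor(key).dtype))

print(dtypes)  # e.g. {'torch.bfloat16'} would explain the MPS error
```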