adieyal / sd-dynamic-prompts

A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
MIT License

"TypeError: BFloat16 is not supported on MPS" #740

Closed: Enashka closed this issue 3 months ago

Enashka commented 3 months ago

First off, a huge congrats on this game-changer of a tool.

I'm still getting my feet wet but tried to set up a batch process to run all night with some wildcards:

Prompt: from the movie Dune, Dune/characters, in Dune/backgrounds, {2-3$$Dune/mood}, Dune/settings, {1-2$$Dune/lora}, masterpiece, best quality

I'm on an SDXL model at 1024x1024 with a batch size of 100, on a 2021 MacBook Pro (M1, 32 GB), launched with --skip-torch-cuda-test --upcast-sampling --no-half-vae --skip-version-check.

It gets quite GPU-intensive, and after just a couple of generations GPU usage spikes more and more; just as I notice a big spike, it stops with "TypeError: BFloat16 is not supported on MPS".

Perhaps BFloat16 precision is indeed not supported on Mac (I'm not an expert). Is there a way to avoid using it, or at least decrease GPU load, without having to decrease the batch size?
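(Editor's note, hedged: PyTorch's MPS backend of that era did lack bfloat16 support, while float16 and float32 worked. A bf16 tensor can be upcast on the CPU side before it ever touches the device, which sidesteps the error entirely. A minimal illustration, runnable on any backend:)

```python
import torch

# A weight tensor stored in bfloat16, as some model/LoRA files are.
w = torch.ones(4, 4, dtype=torch.bfloat16)

# On an MPS build without bf16 support, w.to("mps") raises
# "TypeError: BFloat16 is not supported on MPS".
# Upcasting on the CPU side first avoids the unsupported dtype:
w32 = w.to(torch.float32)
print(w32.dtype)  # torch.float32
```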

Thanks in advance!

Here's the full output if needed:

100%|█████████████████████████████████████████████████████████████| 25/25 [01:22<00:00, 3.31s/it]
Error completing request | 50/2500 [02:40<1:37:57, 2.40s/it]
Arguments: ('task(ifoaltvs77fguv6)', <gradio.routes.Request object at 0x1bd2e39d0>, 'from the movie Dune, Dune/characters, in Dune/backgrounds, {2-3$$Dune/mood}, Dune/settings, {1-2$$Dune/lora}, masterpiece, best quality', 'ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft, multiple hands, low contrast, low resolution, out of focus', [], 25, 'DPM++ 2M Karras', 100, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x1f5f21330>, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), True, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/processing.py", line 785, in process_images
    res = process_images_inner(p)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/processing.py", line 908, in process_images_inner
    p.setup_conds()
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/processing.py", line 1424, in setup_conds
    super().setup_conds()
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/processing.py", line 505, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/processing.py", line 491, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/sd_models_xl.py", line 32, in get_learned_conditioning
    c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/sd_hijack_clip.py", line 276, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/sd_hijack_clip.py", line 354, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=self.wrapped.layer == "hidden")
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
    return self.text_model(
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 383, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 272, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/modules/devices.py", line 164, in forward_wrapper
    result = self.org_forward(*args, **kwargs)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 498, in network_Linear_forward
    network_apply_weights(self)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 406, in network_apply_weights
    updown, ex_bias = module.calc_updown(weight)
  File "/Users/Sentinel/Sites/stable-diffusion-webui/extensions-builtin/Lora/network_lokr.py", line 40, in calc_updown
    w1 = self.w1.to(orig_weight.device)
TypeError: BFloat16 is not supported on MPS
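(Editor's note: the innermost frame shows the crash happening in the LoKr LoRA path, when `network_lokr.py` moves a bfloat16 weight onto the MPS device via `self.w1.to(orig_weight.device)`. A hedged sketch of the kind of guard that avoids this, upcasting bf16 before the device move; the helper name is illustrative and not part of the webui codebase:)

```python
import torch

def move_for_mps(t: torch.Tensor, device) -> torch.Tensor:
    """Move `t` to `device`, upcasting bfloat16 to float32 first when
    the target is MPS, which rejects bf16 on the affected builds.
    (Illustrative helper, not part of the webui codebase.)"""
    if str(device).startswith("mps") and t.dtype == torch.bfloat16:
        t = t.to(torch.float32)
    return t.to(device)
```

With a guard like this in place of a raw `.to(device)` call, a bf16 LoKr weight would be upcast instead of raising the TypeError above.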

Enashka commented 3 months ago

Answering my own question: the issue was a corrupted LoRA.
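(Editor's note, for anyone hitting the same error: since the crash occurs while a LoRA's weights are moved to MPS, it can help to find which file stores bfloat16 tensors. A .safetensors file begins with an 8-byte little-endian header length followed by a JSON header listing each tensor's dtype, so the dtypes can be inspected with the standard library alone, without loading any weights. A sketch; the file name in the usage line is illustrative:)

```python
import json
import struct

def tensor_dtypes(path: str) -> dict:
    """Read the JSON header of a .safetensors file and return
    {tensor_name: dtype} without loading any tensor data."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {name: info["dtype"]
            for name, info in header.items()
            if name != "__metadata__"}  # skip the optional metadata entry

# Usage: flag any LoRA whose weights are stored as BF16.
# dtypes = tensor_dtypes("Dune_character.safetensors")  # illustrative name
# print([n for n, d in dtypes.items() if d == "BF16"])
```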
