ArtVentureX / sd-webui-agent-scheduler

Error completing request? #252

designosis commented 1 month ago

Hey! Yesterday, after updating A1111 and all extensions to their latest versions, my scheduled batches began failing several times a day. Sometimes a batch runs to completion, sometimes it crashes mid-batch. It had been working flawlessly for months until now, and nothing else in my config has changed.

Basically, let's say I have 5 tasks queued, each generating 30 images. One might successfully go through all 30, another might quit after 3 images (and then retry once all the other queued batches have run), another might fail after 15 images, and so on. Each failure produces an "Error completing request" message referencing an agent_scheduler.task_runner.FakeRequest object, as seen below:

```
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 30 images in a total of 30 batches.
*** Error completing request
*** Arguments: ('task(8vuw7wv4sjzxum4)', <agent_scheduler.task_runner.FakeRequest object at 0x460ce2410>, '__PERSON__\nBREAK\n__EXPRESSION__,__EYES__, __HAIR__\nBREAK\n__OUTFIT__\nBREAK\n__LOCATION__\nBREAK\n{OverallDetail, |}(beautiful and aesthetic:1.2), realistic, highly detailed, high contrast, {soft cinematic light,|}official art, {<lora:LowRA:{0.4|0.6|0.8}> dark theme, ||} <lora:detail_slider_v4:{0.5|0.7|0.9|1.1|1.3}>', 'bad-image-v2-39000, bad-hands-5, badIrisNeg, extra limbs, missing limbs, floating limbs, (mutated hands and fingers:1.1), disconnected limbs, mutation, mutated, blurry, amputation, signature, artist name, monochrome, grayscale, illustration, painting, cartoon, sketch', [], 30, 1, 7, 512, 768, True, 0.35, 2, 'R-ESRGAN 4x+', 20, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', ['Clip skip: 2', 'Model hash: BEST/BeautifulTruth.safetensors [7eba3131c9]', 'VAE: vae-ft-mse-840000-ema-pruned.safetensors'], 0, 40, 'DPM++ 3M SDE', 'Exponential', False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.83, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0.0, 'ad_mask_max_ratio': 1.0, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7.0, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1.0, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1.0, 'ad_controlnet_guidance_start': 0.0, 'ad_controlnet_guidance_end': 1.0}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0.0, 'ad_mask_max_ratio': 1.0, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7.0, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1.0, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1.0, 'ad_controlnet_guidance_start': 0.0, 'ad_controlnet_guidance_end': 1.0}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, '', '', 0) {}
Traceback (most recent call last):
  File "/Volumes/SD/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/Volumes/SD/modules/txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "/Volumes/SD/modules/processing.py", line 845, in process_images
    res = process_images_inner(p)
  File "/Volumes/SD/modules/processing.py", line 959, in process_images_inner
    p.setup_conds()
  File "/Volumes/SD/modules/processing.py", line 1495, in setup_conds
    super().setup_conds()
  File "/Volumes/SD/modules/processing.py", line 506, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "/Volumes/SD/modules/processing.py", line 492, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "/Volumes/SD/modules/prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/Volumes/SD/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/modules/sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/Volumes/SD/modules/sd_hijack_clip.py", line 276, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/Volumes/SD/modules/sd_hijack_clip.py", line 331, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
    return self.text_model(
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 383, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 272, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Volumes/SD/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/SD/modules/devices.py", line 164, in forward_wrapper
    result = self.org_forward(*args, **kwargs)
  File "/Volumes/SD/extensions-builtin/Lora/networks.py", line 501, in network_Linear_forward
    network_apply_weights(self)
  File "/Volumes/SD/extensions-builtin/Lora/networks.py", line 406, in network_apply_weights
    updown, ex_bias = module.calc_updown(weight)
  File "/Volumes/SD/extensions-builtin/Lora/network_hada.py", line 30, in calc_updown
    w1a = self.w1a.to(orig_weight.device)
TypeError: BFloat16 is not supported on MPS
```
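
The traceback bottoms out in the built-in LoRA extension moving a tensor to the MPS device, so as far as I can tell the failure reduces to the minimal sketch below (my assumption, not confirmed: a macOS box where the LoHa weights arrive as bfloat16 and the torch build rejects bfloat16 on MPS):

```python
import torch

# Minimal repro sketch of the final traceback frame
# (w1a = self.w1a.to(orig_weight.device)): on affected torch builds,
# moving a bfloat16 tensor to the MPS device raises
# "TypeError: BFloat16 is not supported on MPS".
if torch.backends.mps.is_available():
    w1a = torch.zeros(4, 4, dtype=torch.bfloat16)  # stand-in for the LoHa weight
    w1a = w1a.to(torch.device("mps"))              # raises the TypeError above
```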

My extensions include: adetailer, sd-dynamic-prompts, sd-webui-agent-scheduler, and SD-WebUI-BatchCheckpointPrompt.
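
For what it's worth, since the failing call is the `.to(orig_weight.device)` in `extensions-builtin/Lora/network_hada.py`, here is an untested sketch of the kind of local workaround I'm considering: cast the weight to the base weight's dtype on CPU before moving it, so the MPS backend never sees bfloat16. The helper name `to_device_safe` and the stand-in tensors are mine, purely for illustration:

```python
import torch

def to_device_safe(t: torch.Tensor, like: torch.Tensor) -> torch.Tensor:
    """Move t to like's device, first matching like's dtype on CPU so the
    MPS backend never receives a bfloat16 tensor (untested assumption)."""
    return t.to(dtype=like.dtype).to(like.device)

# Hypothetical stand-ins for orig_weight / self.w1a from calc_updown:
if torch.backends.mps.is_available():
    orig_weight = torch.zeros(4, 4, dtype=torch.float32, device="mps")
    w1a = torch.zeros(4, 4, dtype=torch.bfloat16)  # how the LoHa weight arrives
    w1a = to_device_safe(w1a, orig_weight)         # cast first, then move
```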

Any idea what might be causing this? Thanks!