pamparamm / sd-perturbed-attention

Perturbed-Attention Guidance and Smoothed Energy Guidance for ComfyUI and SD Forge
MIT License

Error on Forge/reForge #21

Closed slashedstar closed 3 months ago

slashedstar commented 3 months ago

With SDXL I get the same error on both a fresh reForge install and a fresh "old" Forge install (from https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/849).

Traceback (most recent call last):
  File "E:\New folder\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "E:\New folder\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\New folder\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "E:\New folder\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "E:\New folder\webui\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\New folder\webui\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\New folder\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\New folder\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "E:\New folder\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\New folder\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\New folder\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\New folder\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "E:\New folder\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "E:\New folder\webui\ldm_patched\modules\samplers.py", line 303, in sampling_function
    cfg_result = fn(args)
  File "E:\New folder\webui\extensions\sd-perturbed-attention\pag_nodes.py", line 180, in post_cfg_function
    (seg_cond_pred, _) = calc_cond_uncond_batch(model, cond, None, x, sigma, model_options)
  File "E:\New folder\webui\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "E:\New folder\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\New folder\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 889, in forward
    h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "E:\New folder\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\New folder\webui\ldm_patched\ldm\modules\attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\New folder\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\New folder\webui\ldm_patched\ldm\modules\attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "E:\New folder\webui\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "E:\New folder\webui\ldm_patched\ldm\modules\attention.py", line 504, in _forward
    n = attn1_replace_patch[block_attn1](n, context_attn1, value_attn1, extra_options)
  File "E:\New folder\webui\extensions\sd-perturbed-attention\pag_utils.py", line 147, in seg_attention
    return attention(q, k, v, heads=heads, attn_precision=extra_options["attn_precision"])
KeyError: 'attn_precision'
'attn_precision'
*** Error completing request
pamparamm commented 3 months ago

Should be fixed now
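
For context, the crash is a plain KeyError: the `seg_attention` patch in `pag_utils.py` reads `extra_options["attn_precision"]`, but older Forge/reForge builds do not populate an `attn_precision` key in `extra_options`. Below is a minimal sketch of the kind of defensive lookup that avoids this KeyError; the function and parameter names mirror the traceback and are purely illustrative, not the actual commit that closed this issue:

```python
def seg_attention_safe(q, k, v, extra_options, attention_fn, heads):
    """Illustrative KeyError-safe variant of the attn1 replace patch.

    Older Forge builds may not put "attn_precision" into extra_options,
    so fall back to the attention function's default precision instead
    of indexing the dict directly. `attention_fn` stands in for the
    optimized attention call used by the extension.
    """
    attn_precision = extra_options.get("attn_precision")  # None if the key is missing
    if attn_precision is None:
        # Omit the keyword entirely so backends that predate the
        # attn_precision option still accept the call.
        return attention_fn(q, k, v, heads=heads)
    return attention_fn(q, k, v, heads=heads, attn_precision=attn_precision)
```

Using `dict.get` (or an equivalent guard) keeps the patch compatible with backends that never set `attn_precision`.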