Could you provide a Colab Notebook reproducing this error?
@zideliu
Someone broke diffusers. I tried version v0.20.0 and it works:
!pip install -q --upgrade torch diffusers==v0.20.0 transformers accelerate
The latest diffusers version is broken.
I don't understand https://github.com/huggingface/diffusers/issues/5028#issuecomment-1720858385. What's not working?
Hope this helps: https://colab.research.google.com/drive/1KKi9R4aRZ3eE92wkPfUBTTAxPtDwqxiW
You should really create a separate issue thread for this and include the stack trace.
Because the error is similar. I searched for the error on the issue page first; that's why I commented here:

hacked_DownBlock2D_forward() got an unexpected keyword argument 'scale'

BTW, I was just letting the OP know that changing to the old diffusers version will solve the error.
@yanchaoguo Did you figure out how to work around or solve this issue? I am also blocked by the same issue.
Faced the same issue with StableDiffusionReferencePipeline
I guess it is because the original forward method (https://github.com/huggingface/diffusers/blame/c78ee143e9d3cb52147cbdcda13707d02f96961c/src/diffusers/models/unet_2d_blocks.py#L930), which StableDiffusionXLReferencePipeline and StableDiffusionReferencePipeline replace, now takes a scale argument for LoRA. A sketch of the resulting mismatch is below.
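To make that concrete, here is a minimal, self-contained sketch of the mismatch (simplified stand-in functions, not the actual diffusers source):

```python
# Simplified stand-ins for illustration only, not the actual diffusers code.

def downblock2d_forward(hidden_states, temb=None, scale=1.0):
    # In newer diffusers the block forward gained a LoRA `scale` kwarg, and
    # UNet2DConditionModel.forward now calls it with scale=lora_scale.
    return hidden_states

def hacked_DownBlock2D_forward(hidden_states, temb=None):
    # The reference pipelines monkey-patch the block with a forward that
    # still has the old signature, i.e. no `scale` parameter.
    return hidden_states

downblock2d_forward("sample", scale=1.0)  # fine
try:
    hacked_DownBlock2D_forward("sample", scale=1.0)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'scale'
```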
Facing the same issue here when using StableDiffusionReferencePipeline with diffusers==0.21.4; diffusers==0.20.0 worked fine.
Found a workaround: we can pass scale=None as an argument to both hacked_DownBlock2D_forward() and hacked_UpBlock2D_forward() in StableDiffusionReferencePipeline, as sketched below.
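For reference, a minimal sketch of that workaround (signatures approximated from the hacked forwards in the community pipeline file; the added scale=None default is the only change):

```python
# Sketch of the workaround inside the community pipeline file
# (stable_diffusion_reference.py); signatures are approximations. Adding
# `scale=None` lets the forwards absorb the extra kwarg that newer
# diffusers passes, instead of raising a TypeError.

def hacked_DownBlock2D_forward(self, hidden_states, temb=None, scale=None):
    # `scale` is accepted and ignored so newer diffusers can call this.
    # ... original hacked body unchanged ...
    return hidden_states

def hacked_UpBlock2D_forward(
    self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None, scale=None
):
    # `scale` is accepted and ignored here as well.
    # ... original hacked body unchanged ...
    return hidden_states
```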
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Bump, still having this on the latest version of diffusers (diffusers-0.26.1?) when trying a forward pass with a from-scratch UNet2DConditionModel. UNet2DModel works, though, which I am immensely confused about.
Found a fix, but the error message was not at all elucidating about the real cause. In my case it was because I was using attnblock2d when I meant to use crossattnblock2d. Switching the layer names solved my bug. If anyone gets this issue in the future, check your layer types (see the sketch below).
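To illustrate "check layer types" for a from-scratch model (the values below are assumptions, not the commenter's actual config): blocks without cross-attention don't take the conditioning kwargs that UNet2DConditionModel.forward passes along, so the CrossAttn* block variants are the ones to use where conditioning is expected.

```python
import torch
from diffusers import UNet2DConditionModel

# Illustrative from-scratch config (shapes and sizes are assumptions).
# The CrossAttn* block types accept encoder_hidden_states; plain
# DownBlock2D/AttnDownBlock2D blocks do not take conditioning kwargs.
unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    block_out_channels=(64, 128),
    down_block_types=("CrossAttnDownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=768,
)

sample = torch.randn(1, 4, 32, 32)
timestep = torch.tensor([10])
encoder_hidden_states = torch.randn(1, 77, 768)
out = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states).sample
print(out.shape)  # torch.Size([1, 4, 32, 32])
```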
Describe the bug
```
TypeError                                 Traceback (most recent call last)
Cell In[1], line 18
     16 pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
     17 seed = torch.manual_seed(10240)
---> 18 result_img = pipe(ref_image=style_image,
     19                   prompt="1girl",
     20                   generator=seed,
     21                   num_inference_steps=20,
     22                   reference_attn=True,
     23                   reference_adain=True).images[0]
     24 result_img

File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/.cache/huggingface/modules/diffusers_modules/git/stable_diffusion_xl_reference.py:738, in StableDiffusionXLReferencePipeline.__call__(self, prompt, prompt_2, ref_image, height, width, num_inference_steps, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, callback, callback_steps, cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, attention_auto_machine_weight, gn_auto_machine_weight, style_fidelity, reference_attn, reference_adain)
    734 ref_xt = self.scheduler.scale_model_input(ref_xt, t)
    736 MODE = "write"
--> 738 self.unet(
    739     ref_xt,
    740     t,
    741     encoder_hidden_states=prompt_embeds,
    742     cross_attention_kwargs=cross_attention_kwargs,
    743     added_cond_kwargs=added_cond_kwargs,
    744     return_dict=False,
    745 )
    747 # predict the noise residual
    748 MODE = "read"

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py:966, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, encoder_attention_mask, return_dict)
    956 sample, res_samples = downsample_block(
    957     hidden_states=sample,
    958     temb=emb,
    (...)
    963     **additional_residuals,
    964 )
    965 else:
--> 966     sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
    968 if is_adapter and len(down_block_additional_residuals) > 0:
    969     sample += down_block_additional_residuals.pop(0)

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

TypeError: StableDiffusionXLReferencePipeline.__call__.<locals>.hacked_DownBlock2D_forward() got an unexpected keyword argument 'scale'
```
Reproduction
```python
import torch
from PIL import Image
from diffusers.utils import load_image
from diffusers import DiffusionPipeline, AutoencoderTiny
from diffusers.schedulers import UniPCMultistepScheduler

style_image = load_image("imgs/沙滩动漫.png").convert("RGB")

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="stable_diffusion_xl_reference",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    safety_checker=None,
    local_files_only=True,
).to("cuda:0")

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
seed = torch.manual_seed(10240)
result_img = pipe(
    ref_image=style_image,
    prompt="1girl",
    generator=seed,
    num_inference_steps=20,
    reference_attn=True,
    reference_adain=True,
).images[0]
result_img
```
Logs
No response
System Info
diffusers version: 0.21.0

Who can help?
No response