WASasquatch / FreeU_Advanced

FreeU - Free Lunch, and Dinner.
MIT License

VanillaTemporalModule.forward() missing 1 required positional argument: 'encoder_hidden_states' #3

Open opensourcefan opened 1 year ago

opensourcefan commented 1 year ago

When trying to create an animation I encounter the following error:

Error occurred when executing KSampler:

VanillaTemporalModule.forward() missing 1 required positional argument: 'encoder_hidden_states'

File "/home/xxxxxxxx/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/xxxxxxxx/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/xxxxxxxx/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/xxxxxxxx/ComfyUI/nodes.py", line 1236, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/home/xxxxxxxx/ComfyUI/nodes.py", line 1206, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 161, in animatediff_sample
return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_utils.py", line 162, in wrapped_function
return function_to_wrap(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/sample.py", line 97, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/xxxxxxxx/ComfyUI/comfy/samplers.py", line 785, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/xxxxxxxx/ComfyUI/comfy/samplers.py", line 690, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/home/xxxxxxxx/ComfyUI/comfy/samplers.py", line 630, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/comfy/samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 537, in sliding_sampling_function
cond, uncond = sliding_calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 519, in sliding_calc_cond_uncond_batch
sub_cond_out, sub_uncond_out = calc_cond_uncond_batch(model_function, sub_cond, sub_uncond, sub_x, sub_timestep, max_total_area, sub_cond_concat, model_options)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 431, in calc_cond_uncond_batch
output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
File "/home/xxxxxxxx/ComfyUI/comfy/model_base.py", line 63, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
File "/home/xxxxxxxx/ComfyUI/custom_nodes/FreeU_Advanced/nodes.py", line 173, in __temp__forward
h = forward_timestep_embed(module, h, emb, context, transformer_options)
File "/home/xxxxxxxx/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 61, in forward_timestep_embed
x = layer(x)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)

It occurs with this workflow, which is known to be working. Once I delete FreeU_Advanced, everything works fine.

Youtube_ControlNet_AnimateDiff.json.txt
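
For anyone trying to read the traceback: the patched forward ends up calling the stock forward_timestep_embed, which falls through to a bare layer(x) call, so AnimateDiff's temporal module never receives the context it requires. Below is a minimal, hypothetical sketch of that failure pattern; the class and function names are made up and are not the real ComfyUI/AnimateDiff code.

```python
# A minimal, hypothetical reproduction of the failure pattern in the traceback.
# None of these classes are the real ComfyUI/AnimateDiff implementations; they
# only mirror the shape of the calls involved.
import torch
import torch.nn as nn


class FakeTemporalModule(nn.Module):
    """Stands in for AnimateDiff's VanillaTemporalModule, whose forward()
    requires encoder_hidden_states as a positional argument."""

    def forward(self, x, encoder_hidden_states):
        return x  # the real module would attend across frames here


def plain_forward_timestep_embed(layer, x, emb=None, context=None):
    # Stands in for a forward_timestep_embed-style dispatcher that does not
    # recognise temporal modules, so `context` is never forwarded to them.
    return layer(x)


x = torch.randn(1, 4, 8, 8)
try:
    plain_forward_timestep_embed(FakeTemporalModule(), x, context=torch.randn(1, 77, 768))
except TypeError as e:
    # forward() missing 1 required positional argument: 'encoder_hidden_states'
    print(e)
```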

jags111 commented 1 year ago

Had the same issue using AnimateDiff; there were no problems until the new version came in. I had to sit and remove FreeU Advanced from all my workflows, and there were so many to edit. The issue never comes up with certain workflows, but with AnimateDiff it is more of a hassle.

WASasquatch commented 1 year ago

It's not compatible with AnimateDiff models. There has been no change to ComfyUI's UnetModel.forward, so nothing has changed in the FreeU code. It's just a limitation of AnimateDiff or ComfyUI. AnimateDiff models are not normal SD models; they edit ComfyUI with patches just like FreeU does.
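
To illustrate the kind of conflict being described (the class and function names below are made up, not the actual FreeU_Advanced or AnimateDiff code): when two extensions each replace the same forward method, whichever patch is applied last wins, and the other extension's argument handling is silently bypassed.

```python
# Hypothetical illustration of two extensions patching the same forward method.
# This is not the real node code; it only shows why the last patch applied can
# silently drop behaviour the other extension relies on.


class FakeUNetModel:
    def forward(self, x, context=None):
        return f"stock forward(x={x}, context={context})"


def animatediff_style_patch(cls):
    original = cls.forward

    def patched(self, x, context=None):
        # AnimateDiff-style patch: needs `context` to reach its temporal layers.
        assert context is not None, "temporal layers need context"
        return "animatediff -> " + original(self, x, context=context)

    cls.forward = patched


def freeu_style_patch(cls):
    def patched(self, x, context=None):
        # A full replacement that re-implements forward from the stock code
        # path and therefore bypasses whatever the previous patch added.
        return f"freeu-style forward(x={x})"

    cls.forward = patched


animatediff_style_patch(FakeUNetModel)
freeu_style_patch(FakeUNetModel)                # applied last, so it wins
print(FakeUNetModel().forward(1, context="c"))  # AnimateDiff's handling never runs
```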

opensourcefan commented 1 year ago

> It's not compatible with AnimateDiff models. There has been no change to ComfyUI's UnetModel.forward, so nothing has changed in the FreeU code. It's just a limitation of AnimateDiff or ComfyUI. AnimateDiff models are not normal SD models; they edit ComfyUI with patches just like FreeU does.

Just in case it wasn't clear, I was not using your advanced node, just the normal built-in FreeU node. FreeU Advanced simply being installed was conflicting with the AnimateDiff setup. Upon deleting it, all was well again.

WASasquatch commented 1 year ago

> They edit ComfyUI with patches just like FreeU [Advanced] does.

Comfyanonymous isn't putting in patching ability for the input blocks and middle block like I asked, which give far better results than the output blocks, which just burn and ruin images. You could ask the AnimateDiff folks to put in patching for the input/output blocks so I could detect theirs, but I don't know if they care.
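
For readers unfamiliar with what "patching ability for the input blocks and middle block" refers to: the idea is that, instead of each extension replacing UNetModel.forward outright, the model would look up small registered callbacks and run them at fixed points, so several extensions can hook the same blocks without clobbering each other. Below is a rough sketch of that pattern, with made-up dictionary keys and function names rather than ComfyUI's actual API.

```python
# Rough sketch of a callback-based block-patch mechanism, with made-up names.
# The model applies every registered patch in order at a fixed point, so
# multiple extensions can hook the same blocks without replacing forward().
import torch


def freeu_scale_patch(h, transformer_options):
    # Hypothetical input/middle-block patch: scale part of the hidden states,
    # loosely in the spirit of FreeU's backbone scaling.
    h = h.clone()
    h[:, : h.shape[1] // 2] *= 1.1
    return h


model_options = {"patches": {"input_block_patch": [freeu_scale_patch]}}


def run_input_block(block, h, transformer_options, model_options):
    h = block(h)
    # Apply every registered patch in turn instead of letting one extension
    # own the whole forward pass.
    for patch in model_options.get("patches", {}).get("input_block_patch", []):
        h = patch(h, transformer_options)
    return h


h = run_input_block(torch.nn.Identity(), torch.randn(1, 320, 8, 8), {}, model_options)
print(h.shape)
```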

jags111 commented 1 year ago

Understood. I will convey the same to the AnimateDiff team and raise the issue there so they can attend to it. Awesome, and thanks for your feedback.

WASasquatch commented 1 year ago

> Understood. I will convey the same to the AnimateDiff team and raise the issue there so they can attend to it. Awesome, and thanks for your feedback.

Looks like Comfy may add the input/middle block patching, so at that point FreeU Advanced will no longer need to patch UnetModel.forward.