ArtVentureX / comfyui-animatediff

AnimateDiff for ComfyUI
Apache License 2.0

TypeError: string indices must be integers #59

Closed. GamerYuan closed this issue 11 months ago.

GamerYuan commented 11 months ago

Error when using a video input with ControlNet for AnimateDiff.

From my limited testing, it currently only seems to work when the input frame count is lower than 16. Not sure if I'm missing something.

Error:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
    return super().sample(
  File "D:\ComfyUI\nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 74, in sample
    return orig_comfy_sample(model, *args, **kwargs, callback=callback)
  File "D:\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI\comfy\samplers.py", line 728, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI\comfy\samplers.py", line 633, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\ComfyUI\comfy\samplers.py", line 589, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
  File "D:\Stable Diffusion\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\Stable Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\comfy\samplers.py", line 287, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "D:\Stable Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\comfy\k_diffusion\external.py", line 129, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\ComfyUI\comfy\k_diffusion\external.py", line 155, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\ComfyUI\comfy\samplers.py", line 275, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 466, in sampling_function
    cond, uncond = sliding_calc_cond_uncond_batch(
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 442, in sliding_calc_cond_uncond_batch
    sub_cond_out, sub_uncond_out = calc_cond_uncond_batch(
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 260, in calc_cond_uncond_batch
    p = get_area_and_mult(x, x_in, cond_concat_in, timestep)
  File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 151, in get_area_and_mult
    control = cond[1]["control"]
TypeError: string indices must be integers
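
If it helps, here is a minimal sketch of what I think is happening, assuming that after the ComfyUI change cond[1] ends up being a plain string rather than the dict of options the extension indexes into (this is my own reproduction, not the actual ComfyUI data):

    # hypothetical data, only meant to reproduce the same TypeError
    cond = ("a positive prompt", "strength=1.0")  # assume the second element is now a str, not a dict
    control = cond[1]["control"]                  # indexing a str with a str key
    # raises: TypeError: string indices must be integers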

Below is my workflow: [workflow screenshot attached]

tungnguyensipher commented 11 months ago

There have been some breaking changes in ComfyUI's sampling function recently. This extension has been updated to be compatible with the latest version of ComfyUI. Please make sure to update both ComfyUI and this extension to the latest commit and try again.

GamerYuan commented 11 months ago

Hi there @tungnguyensipher, thanks for the reply. I have just pulled the latest version of both ComfyUI and this extension. The issue still exists, with a different error message:

Error occurred when executing AnimateDiffSampler:

list indices must be integers or slices, not str

File "D:\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
return super().sample(
File "D:\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 77, in sample
return orig_comfy_sample(model, *args, **kwargs, callback=callback)
File "D:\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\ComfyUI\comfy\samplers.py", line 728, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\ComfyUI\comfy\samplers.py", line 633, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\ComfyUI\comfy\samplers.py", line 589, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
File "D:\Stable Diffusion\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\Stable Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\ComfyUI\comfy\samplers.py", line 287, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "D:\Stable Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\ComfyUI\comfy\k_diffusion\external.py", line 129, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\ComfyUI\comfy\k_diffusion\external.py", line 155, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\ComfyUI\comfy\samplers.py", line 275, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 418, in sampling_function
cond, uncond = sliding_calc_cond_uncond_batch(
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 395, in sliding_calc_cond_uncond_batch
sub_cond_out, sub_uncond_out = calc_cond_uncond_batch(
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 214, in calc_cond_uncond_batch
p = get_area_and_mult(x, x_in, timestep)
File "D:\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sliding_context_sampling.py", line 129, in get_area_and_mult
model_conds = conds["model_conds"]
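
If it's useful, my guess at the mismatch, based only on the names in the traceback: the updated code expects a dict-style cond entry with a "model_conds" key, but what actually arrives still looks like the old list-style entry. A hypothetical reproduction:

    # hypothetical shapes, inferred only from the traceback above
    conds = ["latent-placeholder", {"control": None}]  # old list-style entry
    model_conds = conds["model_conds"]                 # indexing a list with a str
    # raises: TypeError: list indices must be integers or slices, not str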

Thanks in advance!

tungnguyensipher commented 11 months ago

Can you share your workflow json file?

GamerYuan commented 11 months ago

Here it is, my bad for not attaching it in the previous reply: animatediff_workflow.json

Edit: Upon further inspection, it seems that trying to generate more than 16 frames, with or without a video input, results in this error for me.

tungnguyensipher commented 11 months ago

@GamerYuan, I've just made an update and pushed it to the repository. Could you please pull these changes and test to see if the issue has been resolved?

GamerYuan commented 11 months ago

Thanks for the quick fix! I have tested it several times and it seems to be working fine. I'll close this issue as resolved.