Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support
Apache License 2.0

Very poor video quality whenever the AnimateDiff node is connected #362

Open Lengyia opened 2 months ago

Lengyia commented 2 months ago

I am running into a problem. When rendering video in ComfyUI, as soon as the AnimateDiff node is connected, the quality of the images produced by the sampler becomes very poor, and every frame is covered with mosaic-like color blocks. If AnimateDiff is removed, the generated image is fine, but then there is no guarantee the resulting video will have coherent motion. I have tried many things with no success (ComfyUI, AnimateDiff, and the other nodes are all on the latest versions): I have changed LoRA weights, sampling methods, AnimateDiff models, and prompts, and swapped the checkpoint and VAE, but nothing changed. I get the same problem even when running on a cloud server. Can anyone help me with this? Thank you very much~ Below is an image generated with the AnimateDiff node added; the quality is very poor. ComfyUI_temp_laprl_00009_ Workflow link: https://pan.quark.cn/s/4d79e27520f0

Lengyia commented 2 months ago

Workflow: v2v.json. I hope someone can help me; this problem has been troubling me for 3 days~

Kosinkadink commented 2 months ago

You can confirm if AnimateDiff-Evolved is working as expected by running the basic txt2img workflow in the readme: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved?tab=readme-ov-file#txt2img

Lengyia commented 2 months ago

It works very well. The basic txt2img workflow has no problem.

PC specs: i5-12600KF, 32 GB RAM, RTX 4070 Ti OC 12 GB. Screenshot 2024-04-27 014833, Screenshot 2024-04-27 015226

abc-kkk commented 1 month ago

Same problem. Does anyone know how to solve it?

Kosinkadink commented 1 month ago

I doubt there is any problem with AnimateDiff-Evolved itself. The generated images here look like the consequence of not using the required number of latents for AnimateDiff to do its thing. The sweet spot for AnimateDiff is to sample 16 latents at a time (if sampling more than 16, Context Options must be plugged in with context_length=16 to trick AnimateDiff into sampling a sweet-spot number of latents at a time).

For @Lengyia, the example workflow worked just fine, so in their own workflow they likely did not use the minimum number of latents needed to get a good result.
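
For reference, here is a minimal Python sketch of the sliding-window idea described above: splitting a batch of latents into overlapping windows of at most 16 so AnimateDiff only ever sees its sweet-spot amount at once. The window/overlap math is illustrative only, not ComfyUI-AnimateDiff-Evolved's actual implementation, and the `context_windows` helper is hypothetical; the real Context Options node exposes its own parameters.

```python
# Illustrative sketch only -- not the extension's real windowing code.
def context_windows(num_latents: int, context_length: int = 16,
                    overlap: int = 4) -> list[list[int]]:
    """Return lists of latent indices, each at most `context_length` long,
    overlapping by `overlap` frames so motion stays coherent across windows."""
    if num_latents <= context_length:
        return [list(range(num_latents))]
    step = context_length - overlap
    windows, start = [], 0
    while start < num_latents:
        end = min(start + context_length, num_latents)
        # Shift a short final window back so every window is full length.
        if end - start < context_length:
            start = end - context_length
        windows.append(list(range(start, end)))
        if end == num_latents:
            break
        start += step
    return windows

# 32 latents -> three full 16-frame windows with 4-frame overlap:
# [0..15], [12..27], [16..31]
print(context_windows(32))
```

In practice this just means: for clips longer than 16 frames, attach a Context Options node with context_length=16 rather than feeding the whole batch to AnimateDiff at once.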

zhumaokun commented 1 month ago

Hello, I have the same problem. Have you solved it?