Lengyia opened this issue 2 months ago
Workflow: v2v.json. Hope someone can help me; this problem has been troubling me for 3 days~
You can confirm if AnimateDiff-Evolved is working as expected by running the basic txt2img workflow in the readme: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved?tab=readme-ov-file#txt2img
It works very well. The basic txt2img workflow has no problem.
specs of pc
i5 12600KF
32gb ram
rtx 4070ti oc 12gb
Same problem here. Does anyone know how to solve it?
I doubt there is any bug here. The generated image looks like the consequence of not sampling the required amount of latents for AnimateDiff to do its thing. The sweet spot for AnimateDiff is to sample 16 latents at a time; if doing more than 16, context options must be plugged in with context_length=16 to trick AnimateDiff into sampling a sweet-spot amount of latents at a time.
For @Lengyia, the example workflow worked just fine, so their own workflow likely did not use the minimum amount of latents needed to get a good result.
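To make the "context options" advice above concrete, here is a minimal sketch (not AnimateDiff-Evolved's actual code) of how a context scheduler can slice a long latent batch into overlapping windows of 16, so the motion module always sees its sweet-spot amount of latents at a time. The parameter names `context_length` and `context_overlap` mirror the node's options; the exact windowing logic here is an assumption for illustration.

```python
def context_windows(num_latents, context_length=16, context_overlap=4):
    """Return lists of latent indices, covering num_latents in
    overlapping windows of at most context_length frames."""
    # Small batches fit in a single window -- no scheduling needed.
    if num_latents <= context_length:
        return [list(range(num_latents))]
    windows = []
    stride = context_length - context_overlap  # step between window starts
    start = 0
    while start < num_latents:
        end = min(start + context_length, num_latents)
        windows.append(list(range(start, end)))
        if end == num_latents:
            break
        start += stride
    return windows

# A 32-frame batch sampled as overlapping 16-frame windows:
for w in context_windows(32):
    print(w[0], "-", w[-1])
```

The point is that without something like this, connecting AnimateDiff and sampling far fewer (or far more) than 16 latents at once pushes the motion module outside the range it was trained on, which is what produces the mosaic-like artifacts described in this thread.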
Hello, I have the same problem. Have you solved it?
I am running into a problem: when rendering video in ComfyUI, as soon as the AnimateDiff node is connected, the images produced by the sampler come out very poor, each one covered with mosaic-like color blocks. If AnimateDiff is removed, the generated images are fine, but then there is no way to keep the resulting video temporally consistent. I have tried many things without success (ComfyUI, AnimateDiff, and the other nodes are all on the latest versions): changing LoRA weights, sampling methods, AnimateDiff models, and prompts, and swapping out the checkpoint and VAE, but nothing changed. I get the same problem even when running on a cloud server. Can anyone help me with this? Thank you very much~ Below is an image generated with the AnimateDiff node connected; the quality is very poor.
Workflow link: https://pan.quark.cn/s/4d79e27520f0