Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support
Apache License 2.0

vpred model support? #287

Open joachim508 opened 9 months ago

joachim508 commented 9 months ago

The normal AnimateDiff workflow with a v-pred base model only produces noisy images. Is there any way to make it work the way normal SD1.5 models do?

Kosinkadink commented 9 months ago

Can you link me to what a vpred model is? I'm not familiar with it.

joachim508 commented 9 months ago

https://civitai.com/models/199059/9th-tail
https://civitai.com/models/145481?modelVersionId=297793
https://medium.com/@zljdanceholic/three-stable-diffusion-training-losses-x0-epsilon-and-v-prediction-126de920eb73

A v-pred model requires a config file alongside the checkpoint. I managed to produce normal v2v output from a v-pred model with the RAVE addon (https://github.com/spacepxl/ComfyUI-RAVE), since it has a specialized KSampler node. So I wonder whether v-pred models conflict with the AnimateDiff logic or with the default KSampler process?
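For reference, the parametrizations compared in the article linked above are related by simple algebra: with ᾱ_t the cumulative alpha at timestep t, a noised latent is x_t = √ᾱ_t·x_0 + √(1−ᾱ_t)·ε, and a v-prediction model is trained to output v = √ᾱ_t·ε − √(1−ᾱ_t)·x_0 instead of ε. A minimal PyTorch sketch of the conversion, using standard DDPM notation rather than any AnimateDiff-Evolved code:

```python
import torch

def v_to_eps_and_x0(v, x_t, alphas_cumprod, t):
    """Convert a v-prediction model output into the epsilon and x0
    parametrizations, using
        x_t = sqrt(a) * x0 + sqrt(1 - a) * eps
        v   = sqrt(a) * eps - sqrt(1 - a) * x0
    with a = alphas_cumprod[t]."""
    a = alphas_cumprod[t].view(-1, 1, 1, 1)      # cumulative alpha for each timestep
    sqrt_a = a.sqrt()
    sqrt_one_minus_a = (1.0 - a).sqrt()
    eps = sqrt_a * v + sqrt_one_minus_a * x_t    # predicted noise
    x0 = sqrt_a * x_t - sqrt_one_minus_a * v     # predicted clean latent
    return eps, x0
```

A sampler that treats the raw v output as if it were ε points the denoising step in the wrong direction at most timesteps, which matches the noise-only results described above.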

K1LL3RPUNCH commented 8 months ago

+1
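A small self-contained demonstration of the mismatch (assumptions: plain DDPM algebra on synthetic tensors, no ComfyUI or AnimateDiff code involved): reconstructing x_0 with the v-prediction formula recovers the clean latent exactly, while reusing the epsilon formula on a v output does not.

```python
import torch

torch.manual_seed(0)
x0 = torch.randn(1, 4, 8, 8)                 # pretend clean latent
eps = torch.randn_like(x0)                   # true noise
a = torch.tensor(0.5)                        # cumulative alpha at some mid timestep

x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps   # forward-noised latent
v = a.sqrt() * eps - (1 - a).sqrt() * x0     # what a v-pred UNet is trained to output

# Correct v-parametrization reconstruction recovers x0.
x0_from_v = a.sqrt() * x_t - (1 - a).sqrt() * v
print(torch.allclose(x0_from_v, x0, atol=1e-6))      # True

# Treating v as if it were epsilon (what an eps-only sampler does) does not.
x0_wrong = (x_t - (1 - a).sqrt() * v) / a.sqrt()
print((x0_wrong - x0).abs().mean())                  # large error
```

As far as I know, the usual way to flag such a checkpoint in stock ComfyUI is the ModelSamplingDiscrete node with its sampling option set to v_prediction, rather than a sidecar config file; whether that patch composes cleanly with the AnimateDiff motion-module injection is presumably the open question in this issue.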