Closed davyzhang closed 8 months ago
Unfortunately, due to a fundamental limitation (the way ControlNet works), it is impossible to support this. If someone else is able to support it, please point me there, but I don't believe it can be done, even with diffusers, ComfyUI, etc.
Thank you for taking the time to consider my half-baked idea. For now I can keep the same seed for the first frame to get a similar result; it isn't controlled, but it's good enough. Feel free to close this issue.
Similar to this issue, could we allow a denoising strength of 0 for the first frame in img2img? I understand that a high denoising strength is required to let the frames change and to prevent grayed-out images, but with a high value the first frame changes too much and the initial image is lost. If the first frame could use a denoising strength of 0.0, I think the rest would align much more closely with the initial image (in my mind, anyway).
Now it is possible:
keyframe:0
Expected behavior
Benefits
Better Control: With the start (and potentially the end) under control, we can set a better starting point for storytelling. For now the random seed decides everything.
Longer Video: This is actually a trick to get longer videos on pikalab, which only allows 3-second video generation. However, they provide a parameter to make the first frame exactly the image provided by the user, so users can stitch clips together by passing the last frame of one clip as that parameter for the next.
Controlnet Traveling: This could go further as a way to set ControlNet per frame, just like prompts.
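The stitching trick under "Longer Video" can be sketched as a small loop. `generate_clip` below is a hypothetical stand-in for a short-clip generator that accepts a fixed first frame (as pikalab does); frames are plain strings here purely for illustration:

```python
def generate_clip(first_frame, length=3):
    # Hypothetical stand-in for a generator limited to short clips
    # but which honors a user-supplied first frame. Real frames would
    # be images; strings are used here only to show the chaining.
    return [first_frame] + [f"{first_frame}->{i}" for i in range(1, length)]

def stitch(initial_frame, num_clips=2):
    # Chain clips: the last frame of each clip seeds the next one,
    # so consecutive clips join into one longer, continuous video.
    frames = []
    frame = initial_frame
    for _ in range(num_clips):
        clip = generate_clip(frame)
        # Skip the duplicated seed frame on every clip after the first.
        frames.extend(clip if not frames else clip[1:])
        frame = clip[-1]
    return frames
```

Because each clip's first frame is pinned to the previous clip's last frame, the seam between clips carries no visual jump; only the duplicated boundary frame must be dropped when concatenating.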
Thanks for your efforts and this great extension.