al-swaiti opened this issue 6 days ago
It's under development on its own branch here: https://github.com/kijai/ComfyUI-CogVideoXWrapper/tree/1.5_test
Hi Kijai, I think this is a project worth paying attention to: https://github.com/Kmcode1/SG-I2V
That is for StableVideoDiffusion; we already have Tora for CogVideoX, which works very similarly.
Yes, they are all great, but this one seems to have additional arbitrary camera movement control.
I believe video models should be designed around LoRAs, which matters more than how many parameters they were trained with: train the base model on just the basic movements and cover the rest with LoRAs.
Can we use any of the 5b workflows but increase from 49 frames to, say, 240? Or what changes are needed to do a 10-second one?
And is this model more "dynamic", letting you choose between 1-10 seconds? Does it need to be, say, 24 fps native, or 12 fps with interpolation to make it even smoother?
Or, if the workflow needs to be very different, could you share the basic one needed to try the "test version" out?
@DuckersMcQuack Tell it in the prompt to do things "fast", "rapid", and words like that. Then take the result into something like Topaz (which is what I use) -> slow motion -> interpolation -> grab the last frame and repeat (a minimal sketch of that loop is below). The final result needs a proper pass at the fps you want once the editing is done. That's my way of extending the take at my desired fps. Some color correction between the stitches is needed too.
I've been shooting an old-school Enki Bilal + anime animation since yesterday with Fun-5b-I2V. I have the numbers that work for the dystopian anime style I want to approach (the numbers I see used here and there are wrong), and I use this technique and my imagination to reach my goal. CogVideo depends HEAVILY on prompting, I tell you that for sure. Florence-2 can help. When it is done I'll post the resulting full shot without sound, then add sound as well to participate in some online festival or something, just for my pleasure. My nickname is @zazoum1 in case you want to see it and the how-to-do-it process when it is posted, BUT I WARN you that I do A LOT of NSFW stuff there, so if you are a minor or not into those things, don't search for me. I don't make any money on the internet, so basically this is not an advertisement.
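For anyone who wants to script the last-frame loop described above, here is a minimal sketch. It assumes OpenCV is installed, and `generate_i2v_clip` is a hypothetical stand-in for whatever I2V workflow (ComfyUI, Diffusers, etc.) you actually run:

```python
import cv2

def last_frame(video_path):
    """Return the final frame of a video as a BGR image."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, cap.get(cv2.CAP_PROP_FRAME_COUNT) - 1)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not read the last frame of {video_path}")
    return frame

clip = "shot_001.mp4"
for _ in range(3):                          # extend the take three times
    cv2.imwrite("start.png", last_frame(clip))
    clip = generate_i2v_clip("start.png")   # hypothetical: returns a new .mp4
# then interpolate, fix the fps, and color-correct the stitches as described
```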
My first success using Kijai/CogVideoX-5b-1.5: https://github.com/user-attachments/assets/97095301-fe60-46c1-b004-01e12c894587
There is still something very wrong with the diffusers implementation at some resolutions, the old 720x480 included; the results are far better at higher resolutions, for example I2V at 1024x608:
https://github.com/user-attachments/assets/461957f1-c9b6-4865-8689-98ca18192696
T2V 768x768:
https://github.com/user-attachments/assets/5d7a8fdf-2a55-4d58-aa04-c49c6d754bb5
Also, the beginning of T2V outputs being corrupt is a continuing issue.
Can you please post workflows? I have no clue how you're getting such good results with 1.5; mine are horrible.
The workflows are the same; the supported resolutions are different, and we don't really know what works best yet. Besides the default 1360x768, what has worked for me is 768x768 and 936x640. Generally anything larger than 640 can work; the old default resolution does not.
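For anyone scripting this, a small heuristic helper based purely on the sizes reported in this thread; the snap-to-multiples-of-8 rule and the minimum-side warning are assumptions inferred from which resolutions have worked, not documented model constraints:

```python
# Heuristic only: every size reported to work here (1360x768, 768x768,
# 936x640, 1024x608) has both sides divisible by 8 and no side below ~608,
# while the old 720x480 default fails. Treat both rules as assumptions.
def snap_resolution(width, height, multiple=8, min_side=608):
    w = max(multiple, round(width / multiple) * multiple)
    h = max(multiple, round(height / multiple) * multiple)
    if min(w, h) < min_side:
        print(f"warning: {w}x{h} has a side below {min_side}px; "
              "1.5 has been reported to degrade at small sizes (e.g. 720x480)")
    return w, h

print(snap_resolution(1366, 770))  # -> (1368, 768)
```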
@kijai Is it correct that the Diffusers implementation is almost complete? Does this mean the final merge is coming and we're getting Diffusers 1.5?
It's been working well since the last update.
I forgot to link this: https://github.com/huggingface/diffusers/pull/9877. It's what I was referring to. Are we going to get a different Diffusers model, or is it the same one you have uploaded?
It's converted with their script, so it should be the same; it currently works very nicely in the nodes.
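For reference, a minimal sketch of loading the 1.5 I2V weights with plain Diffusers once that PR lands. The repo id `THUDM/CogVideoX1.5-5B-I2V` and the generation settings are assumptions; substitute whatever converted checkpoint you actually use:

```python
# Minimal sketch, assuming the converted checkpoint follows PR #9877's layout.
# "THUDM/CogVideoX1.5-5B-I2V" is assumed to be the upstream repo id.
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")                     # your start frame
frames = pipe(prompt="a calm tracking shot", image=image).frames[0]
export_to_video(frames, "output.mp4", fps=16)       # 1.5 is reported at 16 fps
```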
Awesome! I will test it this afternoon, thanks for your awesome work as always!
How is the video generated by CogVideoX 1.5 I2V? Is it much better than 1.0?
I've mostly been playing with the DimensionX LoRAs with it; the quality is much higher:
https://github.com/user-attachments/assets/feda6249-f69c-493d-8fa0-465498a1e47e
https://github.com/user-attachments/assets/db6c7c4e-94a2-45f4-9d21-54f5941eeb90
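In case anyone wants to reproduce this outside ComfyUI, a hedged sketch of attaching a DimensionX LoRA to the `pipe` from the Diffusers sketch above; the repo id and weight filename are assumptions about how the DimensionX release is laid out:

```python
# Assumed repo id and filename for a DimensionX orbit LoRA; adjust to the
# actual release. load_lora_weights/set_adapters are standard Diffusers APIs.
pipe.load_lora_weights(
    "wenqsun/DimensionX",
    weight_name="orbit_left_lora_weights.safetensors",
    adapter_name="orbit_left",
)
pipe.set_adapters(["orbit_left"], adapter_weights=[1.0])
```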
Thanks for your awesome work. It works at any resolution now, but it runs for quite a long time (8 min for bf16, 1360x768, 49 frames on a 3090 Ti). Great work! Thank you KJ.
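If that runtime comes with VRAM pressure on a 24 GB card, these standard Diffusers toggles (applied to the `pipe` from the sketch above) usually help; whether they speed up or slow down 1.5 specifically is untested here:

```python
pipe.enable_model_cpu_offload()  # keep only the active sub-module on the GPU
pipe.vae.enable_tiling()         # decode the video in spatial tiles
pipe.vae.enable_slicing()        # decode in frame slices to cut peak VRAM
```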
Hi Kijai, how are you? I consider you one of the best in the field of artificial intelligence; honestly, I think there is a team behind this name, which I follow regularly. I always see that you are the first to bring the latest AI technologies to ComfyUI users and others. Today I downloaded this tool to experiment with the video feature, and I manually downloaded the "Kijai/CogVideoX-5b-1.5" model. I didn't find it in the list of available models, so I added it to the code manually. Could you help me adjust the code before I start studying it?
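For the manual download step, a hedged sketch using `huggingface_hub`; the target directory is an assumption and depends on where your ComfyUI install expects CogVideo models:

```python
from huggingface_hub import snapshot_download

# Assumed target path; point local_dir wherever the wrapper scans for models.
snapshot_download(
    repo_id="Kijai/CogVideoX-5b-1.5",
    local_dir="ComfyUI/models/CogVideo/CogVideoX-5b-1.5",
)
```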