-
In train.py, the code doesn't use `sample['ref_pixel_values']` or `sample['clip_pixel_values']`. Does this mean that image-to-video generation training uses a random frame instead of the first frame?
-
Hi, is there any way to overlay the reconstructions (mesh) onto the image, as you've done in your demo videos?
-
Thank you very much for your excellent work. I wanted to ask if it would be possible for you to provide code that directly generates videos from images, bypassing the first stage of text-to-image gene…
-
Hello,
Thank you for this very interesting work.
On your project page, you give the following example:
![make-up](https://github.com/user-attachments/assets/226c2912-56b0-43aa-85d1-8819ced1e7f…
-
Generation gets stuck at 99%. This is happening for me and for others experimenting alongside me.
![image](https://github.com/user-attachments/assets/bad67907-1603-4a89-aabc-7ab5d5f629a1)
-
### Project Title: Image and Audio-Driven Video Generation
**Description:**
This project focuses on creating a system that generates a realistic video from a single image and audio input. By fee…
-
According to documentation here:
https://github.com/aigc-apps/CogVideoX-Fun/blame/4de9773025a621824172ff61b5b7963ac18fd0e5/scripts/README_TRAIN_LORA.md#L15
> `train_mode` is used to specify the …
-
### Model/Pipeline/Scheduler description
Applying pretrained Text-to-Video (T2V) diffusion models to Image-to-Video (I2V) generation tasks using SDEdit often results in low source image fidelity in…
-
### Describe the feature
Adding Remotion dev components to create a video generation page, which will eventually run in the backend.
### Add ScreenShots
![image](https://github.com/user-attachme…
-