merlresearch / TI2V-Zero

Text-conditioned image-to-video generation based on diffusion models.
GNU Affero General Public License v3.0

How to provide multiple images to condition the video generation? #1

Open Fritskee opened 3 months ago

Fritskee commented 3 months ago

Hello,

Thank you for this very interesting work.

On your project page, you give the following example: [make-up video example]

However, when I look at the demo_img2vid.py or gen_video_ufc.py code, I cannot find where to provide multiple frames as input to the model for the "video prediction" task.
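For illustration, the kind of interface I am looking for might look like the sketch below. Note that `sample_next_frame` and `generate_with_multiple_frames` are hypothetical names, not part of the TI2V-Zero codebase; this only shows the idea of seeding an autoregressive generation loop with several conditioning frames instead of a single image:

```python
import torch


def generate_with_multiple_frames(model, cond_frames, text_emb, num_new_frames):
    """Hypothetical sketch: seed the autoregressive loop with several
    conditioning frames instead of a single image.

    `model.sample_next_frame` is an assumed interface, NOT the actual
    TI2V-Zero API.
    """
    # Start the frame buffer with all provided conditioning frames.
    frames = list(cond_frames)
    for _ in range(num_new_frames):
        # Each new frame is conditioned on the frames generated (or given) so far.
        next_frame = model.sample_next_frame(frames, text_emb)
        frames.append(next_frame)
    # Return the full video, including the conditioning frames, as (T, C, H, W).
    return torch.stack(frames, dim=0)
```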

Could you help me with this?

Kind regards, Frits

Fritskee commented 3 months ago

@Nithin-GK any pointers?

seohyun8825 commented 1 week ago

Have you figured it out? I'm also looking to condition on multiple images.

@Nithin-GK any pointers?