gray311 opened this issue 2 weeks ago
Hi, I am very interested in your work!
I'd like to know whether FancyVideo receives the initial frame + text prompt (at the same time) to generate the corresponding videos.
For example:

```python
video = pipe(
    prompt=prompt,    # text prompt
    image=image,      # start frame
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
```
Yes, when using the i2v model, you are essentially generating a video based on the first frame and the accompanying text.
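To make the snippet above self-contained, here is a minimal end-to-end sketch of such an i2v call, where the start frame and the text prompt are passed to the pipeline together. The pipeline class, checkpoint, image path, and prompt are assumptions for illustration: the diffusers CogVideoX i2v pipeline stands in here, and FancyVideo's own loading code may differ.

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load an image-to-video pipeline. CogVideoX's i2v pipeline is used as a
# stand-in; the FancyVideo checkpoint/loading code may differ.
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
).to("cuda")

# The two conditioning inputs are supplied in the same call:
image = load_image("first_frame.png")          # placeholder path to the start frame
prompt = "a description of the desired video"  # placeholder text prompt

video = pipe(
    prompt=prompt,    # text prompt
    image=image,      # start frame
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```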
Looking forward to the i2v training code for research.