Vchitect / Latte

Latte: Latent Diffusion Transformer for Video Generation.
Apache License 2.0

Error when saving text-to-video output #22

Open trongnk2106 opened 4 months ago

trongnk2106 commented 4 months ago

I have a bug in the line that saves the video: PyAVPlugin.write() got an unexpected keyword argument 'quality'. I can't save the output.

maxin-cn commented 4 months ago

Please run pip install imageio-ffmpeg.

trongnk2106 commented 4 months ago

It works, thank you so much! But when I change the video_length parameter in t2v_sample.yaml from 16 to 32, it fails with: RuntimeError: The size of tensor a (32) must match the size of tensor b (16) at non-singleton dimension 1. How can I make a longer video?

maxin-cn commented 4 months ago

I have updated the code to support generating videos longer than 16 frames. Since our t2v model is trained on a 16-frame video dataset, directly generating videos longer than 16 frames will produce low-quality results. We will release a long-video generation model in the future.
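The reported RuntimeError is the typical symptom of a fixed-length temporal table (e.g. temporal position embeddings) trained at 16 frames meeting 32-frame inputs. The actual Latte fix may differ; this is a minimal NumPy sketch of why the shapes clash, and of linear interpolation as one common way to stretch such a table (all array names and the embedding dimension of 8 are hypothetical):

```python
import numpy as np

num_trained_frames = 16  # temporal table length the model was trained with
pos_embed = np.random.randn(num_trained_frames, 8)  # hypothetical (frames, dim) table

video_length = 32
x = np.random.randn(video_length, 8)  # hypothetical per-frame features

# Adding the table directly fails, mirroring the reported error:
# shapes (32, 8) and (16, 8) cannot be broadcast together.
try:
    _ = x + pos_embed
except ValueError as e:
    print("broadcast failed:", e)

# One workaround: linearly interpolate the 16-entry table to 32 entries.
idx = np.linspace(0, num_trained_frames - 1, video_length)
lo = np.floor(idx).astype(int)
hi = np.ceil(idx).astype(int)
w = (idx - lo)[:, None]
stretched = (1 - w) * pos_embed[lo] + w * pos_embed[hi]  # shape (32, 8)

_ = x + stretched  # now broadcasts cleanly
```

Stretching the table lets the forward pass run, but, as noted above, it does not recover the quality of a model actually trained on longer clips.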

trongnk2106 commented 4 months ago

Great! Can you give me the new inference command for generating videos longer than 16 frames?

maxin-cn commented 4 months ago

Just update your code, then run bash sample/t2v.sh.
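For reference, the change discussed above is a single field in t2v_sample.yaml (only this field is shown; the rest of the file and its layout are not reproduced here):

```yaml
# t2v_sample.yaml (excerpt)
video_length: 32  # was 16; quality may degrade beyond the 16-frame training length
```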

trongnk2106 commented 4 months ago

Thanks so much. But can this repo generate video from images? I want to use image-to-video on the t2v output to make the video longer and higher quality. Thanks!

maxin-cn commented 4 months ago

Thank you for your interest, but image-to-video is not currently supported.

trongnk2106 commented 4 months ago

Oh, thank you so much.