showlab / Tune-A-Video

[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
https://tuneavideo.github.io
Apache License 2.0

Unet model #11

Closed Laveena-S closed 1 year ago

Laveena-S commented 1 year ago

No pretrained UNet model is provided in the repository.

dawei03896 commented 1 year ago

+1

zhangjiewu commented 1 year ago

The pretrained UNet weights are included in the stable-diffusion-v1-4 checkpoint folder (under its `unet` subfolder); no separate UNet download is needed.
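
For reference, a minimal sketch of loading those weights with the standard diffusers API. The local path is an assumption (point it at wherever the stable-diffusion-v1-4 checkpoint was downloaded); note that Tune-A-Video itself inflates these 2D UNet weights into its spatio-temporal UNet before fine-tuning, so this is only illustrating where the weights live.

```python
import torch
from diffusers import UNet2DConditionModel

# Assumed local path: wherever the Stable Diffusion v1-4 checkpoint was downloaded.
# The UNet weights live in its "unet" subfolder -- no separate download needed.
pretrained_model_path = "./checkpoints/stable-diffusion-v1-4"

unet = UNet2DConditionModel.from_pretrained(
    pretrained_model_path,
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Sanity check: the SD v1-4 UNet has roughly 860M parameters.
print(f"{sum(p.numel() for p in unet.parameters()) / 1e6:.0f}M parameters")
```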

dawei03896 commented 1 year ago

Thank you for your reply. One question: does the model need to be fine-tuned separately for every input source video and source_prompt?

zhangjiewu commented 1 year ago

Yes.
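
In other words, one-shot tuning means a separate UNet is fine-tuned for each (source video, source_prompt) pair. A hypothetical sketch of launching one tuning run per video is below; the script and config names follow the README's example command and are assumptions here, so adjust them to your own configs.

```python
import subprocess

# One config per source video / source_prompt pair (example names, not exhaustive).
configs = [
    "configs/man-skiing.yaml",
    "configs/rabbit-watermelon.yaml",
]

for cfg in configs:
    # Each run fine-tunes its own copy of the UNet and saves it to the
    # output_dir set in that config; inference with edited prompts then
    # loads the checkpoint tuned on the corresponding source video.
    subprocess.run(
        ["accelerate", "launch", "train_tuneavideo.py", f"--config={cfg}"],
        check=True,
    )
```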

zhangjiewu commented 1 year ago

Closing as solved. Feel free to reopen if you have any other questions.