showlab / Tune-A-Video

[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
https://tuneavideo.github.io
Apache License 2.0

Request for release of selected DAVIS videos and associated captions & prompts #74

Open knightyxp opened 1 year ago

knightyxp commented 1 year ago

Hello,

I'd like to thank you for the great work on the video editing benchmark. Many in the community are now adopting it to compare state-of-the-art methods.

Would it be possible to release the selected videos from the DAVIS dataset, along with the BLIP-generated captions and the 140 manually modified prompts? This would greatly benefit the community by enabling more reliable ablation studies and making it easier to follow up on this work.

Thank you for your consideration!

Best regards,

zhangjiewu commented 1 year ago

We have recently expanded the evaluation benchmark in the Tune-A-Video paper and released a new benchmark for text-guided video editing, namely LOVEU-TGVE dataset. Please follow this instruction to download the dataset. Additionally, the evaluation code and leaderboard are available on our github repo.