euminds opened this issue 1 week ago
Please find validation videos and text prompts via this link: https://hkustgz-my.sharepoint.com/:u:/g/personal/lwang592_connect_hkust-gz_edu_cn/EaZnJVMpR_JOv0pV1tx5cK0B9fKh4tFuWfdw0QMUMfWZsQ?e=g9I2Vh
These videos are collected from the DAVIS video dataset and from related works, DMT and MotionDirector.
Thank you very much!!
Additionally, could you clarify which loss function was used to train the model for the results presented in the paper?
We adopt the debiased hybrid loss and the blend noise initialization strategy (strength = 0.5).
For other args
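For readers unfamiliar with the noise initialization mentioned above, here is a minimal sketch of what a blend-style initialization typically looks like, assuming it linearly interpolates between fresh Gaussian noise and the source-video latents with `strength` as the blend weight. The function name, signature, and the exact interpolation form are my assumptions for illustration, not the repository's API, and the paper's formulation may differ:

```python
import torch

def blend_noise_init(source_latents: torch.Tensor,
                     strength: float = 0.5,
                     generator: torch.Generator | None = None) -> torch.Tensor:
    # Sample fresh Gaussian noise matching the shape/dtype/device of the source latents.
    noise = torch.randn(source_latents.shape,
                        generator=generator,
                        dtype=source_latents.dtype,
                        device=source_latents.device)
    # Assumed linear blend: strength = 1.0 -> pure random noise,
    # strength = 0.0 -> start exactly from the source latents.
    return strength * noise + (1.0 - strength) * source_latents
```

For example, `blend_noise_init(latents, strength=0.5)` would start sampling halfway between the source latents and pure noise, which is consistent with the strength = 0.5 setting quoted above.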
Hi,
Thank you for your insightful work and this repository.
I was wondering if you plan to release the benchmarks proposed in the paper. The paper mentions a validation set with 66 video-edit text pairs from sources like DAVIS, WebVID, and online resources. Could you provide more specifics on:
- The exact videos selected from each dataset.
- The associated text prompts used for video editing.
Any additional details on the dataset or where to access similar resources would be greatly appreciated.
Thanks!