-
Thanks for your great work!
Do you have any plans to release the training code?
-
Integrate video synthesis with the MASS framework for audio and music
-
### Motivation
- To extract insights from the interviews, we first need to store the raw interview data in a way that makes it accessible for people who will be working on…
-
Our company works on video and audio synthesis with DJI UAV video, and we need millisecond-level timestamps to succeed. Is there currently a way to get them?
-
### Model/Pipeline/Scheduler description
This work aims to learn a high-quality text-to-video (T2V) generative model by leveraging a pre-trained text-to-image (T2I) model as a basis. It is a highly…
-
I am getting the following error log when I try running the training for audiovisual synthesis after training my audio branch.
```
--- ./oliver_train/ already exists! ---
--- Totally 18227 vid…
```
-
## 🚀 Feature
Adding toy video datasets for generative models, such as Moving MNIST and RoboNet
## Motivation
The currently available video datasets are suitable for recognition and not synthesis…
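As a rough illustration of what a toy synthesis dataset looks like, here is a minimal sketch that generates Moving-MNIST-style clips with a bouncing white square instead of digit sprites. All names and parameters (`make_bouncing_square_video`, the 64×64 canvas, the 20-frame length) are illustrative assumptions, not part of any existing loader:

```python
import numpy as np

def make_bouncing_square_video(num_frames=20, size=64, sprite=8, seed=0):
    """Generate one toy video: a white square bouncing inside a black canvas,
    in the spirit of Moving MNIST. Returns a (num_frames, size, size) uint8 array."""
    rng = np.random.default_rng(seed)
    # Random initial position (kept inside the canvas) and velocity.
    pos = rng.integers(0, size - sprite, size=2).astype(float)
    vel = rng.uniform(-3, 3, size=2)
    frames = np.zeros((num_frames, size, size), dtype=np.uint8)
    for t in range(num_frames):
        x, y = pos.astype(int)
        frames[t, y:y + sprite, x:x + sprite] = 255  # draw the sprite
        pos += vel
        for d in range(2):  # reflect velocity when a wall is hit
            if pos[d] < 0 or pos[d] > size - sprite:
                vel[d] = -vel[d]
                pos[d] = np.clip(pos[d], 0, size - sprite)
    return frames

video = make_bouncing_square_video()
```

Because the data is procedurally generated, clips of any length and count can be produced on the fly, which is exactly what makes such toy datasets convenient for debugging generative models before moving to real video.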
-
Hi, I have been working with your project for video synthesis. I have successfully generated video with good-quality lip syncing, but the output video seems to have some unnatural should…
-
Thanks for your excellent work!
How can I use a source image and a driving video to generate video-based reenactment? The demo you provided is image-based head reenactment.
-
I see that you use unprocessed video plus the PG noise model to generate enough training video pairs, and then fine-tune on the real video dataset.
Considering that there are 6 real train/dev scenes, and the unpro…