vid2vid

Project | YouTube(short) | YouTube(full) | arXiv | Paper(full)

PyTorch implementation for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation. It can be used to turn semantic label maps into photorealistic videos, synthesize people talking from edge maps, or generate human motions from poses. The core of video-to-video translation is image-to-image translation; some of our work in that space can be found in pix2pixHD and SPADE.
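
As a rough illustration of that idea, here is a minimal, self-contained PyTorch sketch, not the repository's actual model: a toy generator produces each output frame conditioned on the current source frame (e.g., a semantic label map) and a sliding window of previously generated frames, which is the essence of keeping frame-by-frame synthesis temporally coherent. The names `ToySequentialGenerator` and `synthesize_video` are hypothetical and exist only for this example.

```python
# Conceptual sketch only: a stand-in for sequential video-to-video generation,
# not the architecture used in this repository.
import torch
import torch.nn as nn


class ToySequentialGenerator(nn.Module):
    """Toy generator: (current source frame, K past generated frames) -> next RGB frame."""

    def __init__(self, source_channels: int = 3, past_frames: int = 2):
        super().__init__()
        self.past_frames = past_frames
        in_ch = source_channels + 3 * past_frames  # source + stacked past RGB outputs
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], as is common for GAN generators
        )

    def forward(self, source: torch.Tensor, past: torch.Tensor) -> torch.Tensor:
        # source: (B, source_channels, H, W); past: (B, 3 * past_frames, H, W)
        return self.net(torch.cat([source, past], dim=1))


def synthesize_video(generator: ToySequentialGenerator, source_frames: torch.Tensor) -> torch.Tensor:
    """Autoregressively generate a video, feeding previous outputs back in.

    source_frames: (T, B, C, H, W) conditioning sequence (e.g., label maps).
    Returns generated frames of shape (T, B, 3, H, W).
    """
    t_total, b, _, h, w = source_frames.shape
    k = generator.past_frames
    # Start from black frames as the "previously generated" context.
    past = torch.zeros(b, 3 * k, h, w)
    outputs = []
    for t in range(t_total):
        frame = generator(source_frames[t], past)
        outputs.append(frame)
        # Slide the window of past generated frames forward by one.
        past = torch.cat([past[:, 3:], frame], dim=1)
    return torch.stack(outputs)


if __name__ == "__main__":
    gen = ToySequentialGenerator(source_channels=3, past_frames=2)
    labels = torch.randn(5, 1, 3, 64, 64)  # 5 dummy "label map" frames
    video = synthesize_video(gen, labels)
    print(video.shape)  # torch.Size([5, 1, 3, 64, 64])
```

The repository's actual generator is considerably more involved (multi-scale networks, flow-based warping of past frames, and foreground-background modeling); the sketch above only shows the autoregressive conditioning that makes it a video-to-video rather than an image-to-image model.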

Video-to-Video Synthesis
Ting-Chun Wang¹, Ming-Yu Liu¹, Jun-Yan Zhu², Guilin Liu¹, Andrew Tao¹, Jan Kautz¹, Bryan Catanzaro¹
¹NVIDIA Corporation, ²MIT CSAIL
In Neural Information Processing Systems (NeurIPS) 2018

Video-to-Video Translation

Prerequisites

Getting Started

Installation

Testing

Dataset

Training with Cityscapes dataset

Training with face datasets

Training with pose datasets

Training with your own dataset

More Training/Test Details

Citation

If you find this useful for your research, please cite the following paper.

@inproceedings{wang2018vid2vid,
   author    = {Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Guilin Liu
                and Andrew Tao and Jan Kautz and Bryan Catanzaro},
   title     = {Video-to-Video Synthesis},
   booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},   
   year      = {2018},
}

Acknowledgments

We thank Karan Sapra, Fitsum Reda, and Matthieu Le for generating the segmentation maps for us. We also thank Lisa Rhee for allowing us to use her dance videos for training. We thank William S. Peebles for proofreading the paper.
This code borrows heavily from pytorch-CycleGAN-and-pix2pix and pix2pixHD.