NVlabs / few-shot-vid2vid

Pytorch implementation for few-shot photorealistic video-to-video translation.

Our Training Tutorial, hope you like it! #54

Open Peng2017 opened 4 years ago

Peng2017 commented 4 years ago

We made a tutorial on training the few-shot vid2vid network and StyleGAN — hope you like it! You can use StyleGAN and its latent codes to generate few-shot-vid2vid input data with spatial continuity, which helps train a vid2vid network with higher accuracy and more detail (e.g. teeth). https://www.youtube.com/watch?v=zkWHTHFUYrM&lc=Ugwp3pNEoUC5m98xzfB4AaABAg
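The spatial-continuity idea above can be sketched as follows: interpolate between StyleGAN latent codes so that consecutive frames come from nearby latents, which makes the generated images change smoothly. This is only a minimal sketch — the 512-dim latent size is the usual StyleGAN convention, and the generator call that would render each latent into a frame is omitted, since it depends on your StyleGAN checkpoint and code:

```python
import numpy as np

def interpolate_latents(z_start, z_end, num_frames):
    """Linearly interpolate between two latent codes.

    Nearby latents produce spatially similar images, so feeding this
    sequence through a pretrained StyleGAN generator yields a smooth
    pseudo-video usable as vid2vid training data.
    """
    ts = np.linspace(0.0, 1.0, num_frames)
    return np.stack([(1.0 - t) * z_start + t * z_end for t in ts])

rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)  # 512-dim latent (typical StyleGAN size)
z1 = rng.standard_normal(512)
frames = interpolate_latents(z0, z1, num_frames=8)
# Each row of `frames` would be passed to the StyleGAN generator
# (not shown here) to render one frame of the sequence.
```

Spherical interpolation (slerp) is often preferred over linear interpolation for Gaussian latents, but the linear version above already illustrates the continuity idea.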

zgxiangyang commented 4 years ago

This video is incomplete.

aminesoulaymani commented 3 years ago

This is so nice of you.