NVIDIA / vid2vid

PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

about the parameter setting in pose2pose train scripts #56

Closed kekedan closed 6 years ago

kekedan commented 6 years ago

Hi, are the parameter settings in the pose2pose training script the actual parameters from your experiment, or just an example? For instance, "--niter 5 --niter_decay 5". Thank you!

tcwang0509 commented 6 years ago

That is what I used for training on a ~3-minute video. You can modify it based on your needs.
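For context, in pix2pixHD/vid2vid-style training code, `--niter` is the number of epochs at the initial learning rate and `--niter_decay` is the number of epochs over which the rate then decays linearly to zero. A minimal sketch of that schedule (the function name and default `base_lr` here are illustrative assumptions, not the repo's exact code):

```python
def lr_at_epoch(epoch, base_lr=0.0002, niter=5, niter_decay=5):
    """Sketch of a pix2pixHD/vid2vid-style learning-rate schedule:
    constant for `niter` epochs, then linear decay to 0 over `niter_decay` epochs.
    Note: `lr_at_epoch` and `base_lr=0.0002` are illustrative, not from the repo."""
    if epoch <= niter:
        return base_lr
    # Linear decay: reaches 0 at epoch = niter + niter_decay
    return base_lr * (1 - (epoch - niter) / niter_decay)
```

So with `--niter 5 --niter_decay 5`, training runs for 10 epochs total, with the learning rate held constant for the first 5 and annealed to zero over the last 5.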

kekedan commented 6 years ago

@tcwang0509 Thank you. I have another question: the pose2pose GIF you provided is only a few seconds long, and the target woman does not make any large movements. Could you provide a complete video demo like "Everybody Dance Now", so we can understand the effect of the model more clearly?

tcwang0509 commented 6 years ago

What do you mean by large movement? Like a backflip? The motions will definitely be limited by the training video, no matter what algorithm you use. Longer examples are already in our YouTube video.