NVIDIA / vid2vid

PyTorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation.

--no_first_img and --use_single_G training question #117

Open yukai-chiu opened 5 years ago

yukai-chiu commented 5 years ago

Hi, the README says:

Forcing the model to also synthesize the first frame by specifying --no_first_img. This must be trained separately before inference.

Does this mean we have to set the flag after training the model and then retrain it separately? Or can we specify this flag during training so that it is trained at the same time? A sketch of what I mean is below.
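For reference, a minimal sketch of the second interpretation, i.e., passing the flag directly at training time. Only --no_first_img is taken from the README quoted above; train.py is the repo's training entry point, while the experiment name and dataset path are placeholders for my own setup:

```bash
# Hedged sketch: train a model that also synthesizes the first frame
# by passing --no_first_img at training time (flag quoted from the README).
# --name and --dataroot values are placeholders; other flags from the usual
# vid2vid training recipe (scales, label channels, etc.) are omitted here.
python train.py --name my_experiment --dataroot datasets/my_dataset --no_first_img
```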

Also,

Using another generator which was trained on generating single images (e.g., pix2pixHD) by specifying --use_single_G. This is the option we use in the test scripts.

Could you explain in more detail how to connect a pix2pixHD model to vid2vid if I've trained pix2pixHD on my own dataset? My current guess is sketched below.
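My current guess, as a hedged sketch: copy the trained pix2pixHD generator checkpoint into the vid2vid checkpoints directory and run the test script with --use_single_G. Only --use_single_G comes from the README; the checkpoint filename the script expects, the paths, and the other flags are assumptions about my setup, not the confirmed procedure:

```bash
# Hedged sketch: use a separately trained pix2pixHD generator at test time.
# The exact checkpoint filename vid2vid looks for may differ -- check the
# options and test scripts in the repo for the expected name.
cp /path/to/pix2pixHD/checkpoints/my_pix2pixHD_run/latest_net_G.pth \
   checkpoints/my_vid2vid_run/

python test.py --name my_vid2vid_run --dataroot datasets/my_dataset --use_single_G
```

Is that roughly the intended workflow, or does the single-image generator need to be converted or renamed first?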

Thank you!

Any help would be appreciated.