NVIDIA / vid2vid

PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

web/images outputs appear to be entirely red images #68

Open pkmital opened 6 years ago

pkmital commented 6 years ago

Hi,

Thank you for a fascinating release. I've trained for a few epochs on my own dataset of images and am finding that many of the images produced during training are fully red (example screenshot below), even though nothing like them appears in my dataset. Is there something I could be doing wrong, or is this expected behavior? Please let me know if you need more information!

Current train command:

```
python train.py --name cpvs --loadSize 512 --n_frames_total 30 --max_frames_per_gpu 2 --n_downsample_G 1 --num_D 1 --dataroot datasets/cpvs/ --n_gpus_gen -1 --print_freq 5 --niter 10000 --niter_decay 5000 --nThreads 4 --save_epoch_freq 1
```

Example output:

[screenshot: example training output showing fully red images, 2018-11-01]
tcwang0509 commented 6 years ago

It's definitely not expected. Is your real image 3-channel?

pkmital commented 6 years ago

Yes, both A and B are 3-channel images. Attaching a pair for reference (note: they are different sizes; is this an issue?)

One image of a sequence in A:

[image: 00000090]

One image of a sequence in B:

[image: 00000090]
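One quick way to rule out a data issue like the one asked about above is to scan both dataset folders for frames that are not 3-channel RGB and for inconsistent dimensions. A minimal sketch using Pillow; the `train_A`/`train_B` folder layout under `datasets/cpvs/` is an assumption based on the `--dataroot` flag in the train command, not something confirmed in this thread:

```python
# Sanity-check a dataset folder: flag any frame that is not 3-channel RGB,
# and report the set of distinct image sizes found.
# NOTE: the folder paths used in __main__ are hypothetical examples.
from pathlib import Path
from PIL import Image

IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

def find_bad_frames(folder):
    """Return a list of (path, mode, size) for frames that are not RGB."""
    bad = []
    for p in sorted(Path(folder).rglob("*")):
        if p.suffix.lower() not in IMAGE_EXTS:
            continue
        with Image.open(p) as im:
            # Common surprises: "RGBA" (alpha), "L" (grayscale), "P" (palette)
            if im.mode != "RGB":
                bad.append((p, im.mode, im.size))
    return bad

def distinct_sizes(folder):
    """Return the set of (width, height) sizes present in the folder."""
    sizes = set()
    for p in sorted(Path(folder).rglob("*")):
        if p.suffix.lower() in IMAGE_EXTS:
            with Image.open(p) as im:
                sizes.add(im.size)
    return sizes

if __name__ == "__main__":
    for side in ("train_A", "train_B"):  # hypothetical layout under --dataroot
        folder = f"datasets/cpvs/{side}"
        for p, mode, size in find_bad_frames(folder):
            print(f"{p}: mode={mode}, size={size}")
        print(f"{side} sizes: {distinct_sizes(folder)}")
```

Different sizes between A and B are typically handled by the loader's resizing, but a stray RGBA or palette-mode frame would be worth fixing before retraining.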