NVlabs / few-shot-vid2vid

Pytorch implementation for few-shot photorealistic video-to-video translation.

[street] Totally black output on epoch 1 #74

Open jodusan opened 3 years ago

jodusan commented 3 years ago

I downloaded the Cityscapes dataset and started training. After about 1.5 hours on a V100, the outputs in checkpoints/street/web (the images ending in synthesized_image.jpg) are all black. Is this expected? If not, should the output images show something from the very beginning?

Thanks

arulpraveent commented 3 years ago

@dulex123 I'm having the same issue with the pose dataset, except instead of black images I'm getting white ones. Were you able to solve it?

crenaudineau commented 3 years ago

I have a similar problem. Have you fixed the issue? Could it be a small-dataset problem?

hellohawaii commented 3 years ago

I have the same problem when training on the pose/face example datasets. On both, synthesized_image.jpg looks normal for the first few hundred iterations, but after that the synthesized images become totally black or white (with a very thin yellow margin of about 1 pixel). I observed that Df_fake and Df_real dropped to zero at the iteration where the images turned white/black. Can anyone help?

eastchun commented 2 years ago

I had the same problem. Delving into the code, I found that the matting function at line 214 of ./models/networks/generator.py is somehow disabled (I don't know why), so the warped images are never combined into the final images.

If line 214 is changed from:

`if not self.spade_combine:`

to:

`if self.spade_combine:`

then the problem of all-black or all-white synthesized images after around 3000-4000 iterations in epoch 1 might disappear (please try it yourself).
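For context, the matting step being discussed blends the raw synthesized image with the image warped from the reference frame, using a predicted soft mask; if that branch is skipped, the output is only the raw synthesis, so a degenerate generator collapses to a flat black/white image. A minimal sketch of such a blend (function and variable names are illustrative assumptions, not the actual repository code):

```python
def combine_with_warp(raw, warped, mask):
    # Per-pixel linear blend: mask weights the raw synthesis,
    # (1 - mask) weights the warped reference content.
    return [r * m + w * (1 - m) for r, w, m in zip(raw, warped, mask)]

raw = [0.0, 0.0, 0.0]      # degenerate all-black synthesis
warped = [1.0, 0.5, 0.25]  # pixels warped from the reference frame
mask = [0.5, 0.5, 0.5]     # predicted soft matting mask
out = combine_with_warp(raw, warped, mask)
# → [0.5, 0.25, 0.125]
```

Even with an all-black raw synthesis, the blended output still carries the warped reference content, which matches the observation that re-enabling the combine branch removes the black/white outputs.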