tdeboissiere / DeepLearningImplementations

Implementation of recent Deep Learning papers
MIT License

Reproducing results #41

Closed · praxidike closed this issue 7 years ago

praxidike commented 7 years ago

I am trying to reproduce the results of the pix2pix model on the facades dataset. However, the results always look kind of messed up. The following image was generated after more than 300 epochs of training; these monochrome areas keep appearing in every image.

[image: grafik]

Any hints on what might have gone wrong? I used the exact same code from GitHub.

tdeboissiere commented 7 years ago

Hard to say, but 2 hints:

jiangzidong commented 7 years ago

Maybe it's an issue with your image-drawing function, because I can reproduce the good result (TensorFlow backend, patch size 64). What about the pictures in pix2pix/figures/current_batch_training.png or current_batch_validation.png?
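If the drawing step is the culprit, a safe batch-dump helper looks roughly like the sketch below. `save_batch` is an illustrative name, not the repo's API, and it assumes channels-last output in [-1, 1] from a tanh generator:

```python
import numpy as np
from PIL import Image

def save_batch(batch, path):
    """Tile a generated batch into one PNG (illustrative helper).

    Assumes `batch` has shape (N, H, W, 3) with values in [-1, 1],
    the usual output range of a tanh generator.
    """
    # Map [-1, 1] -> [0, 255], clip, then cast: skipping the clip or
    # the uint8 cast is a classic way to get broken-looking output.
    imgs = np.clip((batch + 1.0) * 127.5, 0, 255).astype(np.uint8)
    Image.fromarray(np.concatenate(list(imgs), axis=1)).save(path)
```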

AlexanderFabisch commented 7 years ago

I can reproduce this problem. Here is the code that I used to generate images. This is the result that I got:

[image: generated]
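The snippet itself was linked rather than inlined above; for reference, a minimal sketch of this kind of generation loop, with placeholder paths and assuming a channels-last TensorFlow backend:

```python
import numpy as np
from PIL import Image
from keras.models import load_model

generator = load_model("generator.h5")  # placeholder path

facade = np.asarray(Image.open("facade.png").resize((256, 256)), dtype=np.float32)
x = facade / 127.5 - 1.0                 # input normalized to [-1, 1]
y = generator.predict(x[np.newaxis])[0]  # output is also in [-1, 1]

# Saving `y` as-is is the pitfall: the array is still in [-1, 1] floats,
# so it needs an inverse normalization before an 8-bit writer can
# display it (that missing step is found in the comments below).
Image.fromarray(y.astype(np.uint8)).save("generated.png")
```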

The pix2pix/figures/current_batch_validation.png, however, looks like this:

[image: validation]

How is this generated? I cannot find it in the code.

AlexanderFabisch commented 7 years ago

OK, I found it in the code: pix2pix/src/utils/data_utils.py
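For reference, the forward/inverse pair there follows the usual tanh convention; a minimal sketch (the exact code in the repo may differ slightly):

```python
def normalization(X):
    # uint8 pixels in [0, 255] -> floats in [-1, 1], the range
    # a tanh generator works in
    return X / 127.5 - 1.0

def inverse_normalization(X):
    # floats in [-1, 1] -> floats in [0, 1]; scale by 255 and cast
    # to uint8 if the image writer expects 8-bit pixels
    return (X + 1.0) / 2.0
```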

I added the inverse normalization and now I get this:

[image: generated]

Is there anything else that I missed? I use a batch size of 4 and a patch size of 64x64.

tdeboissiere commented 7 years ago

What happens if you increase the batch size? When you say you applied inverse normalization, did you check that your images were uint8, with pixel amplitudes bounded in [0, 255]?
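A quick sanity check along those lines, as a hypothetical helper:

```python
import numpy as np

def assert_displayable(img):
    """Check an array is ready for an 8-bit image writer."""
    # An image still normalized to [-1, 1] floats fails the first check.
    assert img.dtype == np.uint8, "not uint8: apply inverse normalization and cast"
    assert img.min() >= 0 and img.max() <= 255, "pixel amplitudes out of [0, 255]"
```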

AlexanderFabisch commented 7 years ago

It was indeed a problem with the inverse normalization (not the batch normalization). I fixed it and now the results look really great after 150 epochs on the validation set. Thanks for your help!

[image: generated]