Closed praxidike closed 7 years ago
Hard to say, but two hints:
Maybe the issue is in your image-drawing function, because I can reproduce the good result (TensorFlow backend, patch size 64). What do the images in pix2pix/figures/current_batch_training.png and current_batch_validation.png look like?
I can reproduce this problem. Here is the code that I used to generate images. This is the result that I got:
The pix2pix/figures/current_batch_validation.png however looks like this:
How is this generated? I cannot find it in the code.
OK, I found it in the code: pix2pix/src/utils/data_utils.py
I added inverse normalization and now I got this:
Is there anything else that I did not see? I use a batch size of 4 and a patch size of 64x64.
What happens if you increase the batch size? When you applied the inverse normalization, did you check that your images were uint8, with pixel values bounded in [0, 255]?
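For reference, here is a minimal sketch of such an inverse normalization, assuming the generator outputs tanh-scaled images in [-1, 1] (the function name mirrors the one in pix2pix/src/utils/data_utils.py, but the exact scaling used there may differ):

```python
import numpy as np

def inverse_normalization(X):
    # Map tanh-scaled images from [-1, 1] back to uint8 pixels in [0, 255].
    return ((X + 1.0) * 127.5).astype(np.uint8)

# Example: a batch of 4 images, 64x64 patches, 3 channels.
batch = np.random.uniform(-1.0, 1.0, size=(4, 64, 64, 3)).astype(np.float32)
images = inverse_normalization(batch)
print(images.dtype, images.min(), images.max())
```

Plotting the float [-1, 1] array directly (e.g. with matplotlib) clips negative values to black, which can produce exactly the kind of monochrome areas described above; converting to uint8 first avoids that.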
It was indeed a problem with the inverse normalization (not the batch normalization). I fixed it and now the results look really great after 150 epochs on the validation set. Thanks for your help!
I am trying to reproduce the results of the pix2pix model on the facades dataset. However, the results always look distorted. The following image was generated after more than 300 epochs of training. These monochrome areas keep appearing in every image.
Any hints as to what might have gone wrong? I used the exact same code from GitHub.