junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

The pix2pix test results are very poor. #1467

Open dlsgurdlfkd opened 2 years ago

dlsgurdlfkd commented 2 years ago

I am trying to convert RGB images to infrared-style images by training pix2pix.

It works very well during training.

However, when I run the test with the same model and the same parameters, the results are very poor. What could be the reason?

Please help me.

(The first picture is the original, the second is the image saved during training, and the third is the actual test result.)

[Images: epoch675_real_A, epoch675_fake_B, 101306_fake_B]

junyanz commented 1 year ago

Could you share with us the training and test command lines? Did you use the same flags (e.g., --preprocess)?
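
(For context, a minimal sketch of what such a preprocessing mismatch looks like. These are plain torchvision stand-ins, not the repo's own get_transform; the 286/256 sizes mirror the repo's default --load_size/--crop_size.)

```python
import torchvision.transforms as T

# Hedged sketch (not the repo's code): if training used the default
# resize-and-crop pipeline but the test run preprocesses differently,
# the generator sees inputs at a different scale and quality can drop.
train_tf = T.Compose([
    T.Resize(286),        # matches the default --load_size
    T.RandomCrop(256),    # matches the default --crop_size
    T.ToTensor(),
])

# A mismatched test pipeline, e.g. feeding full-size images:
test_tf = T.Compose([
    T.ToTensor(),         # roughly what --preprocess none does
])
```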

taesungp commented 1 year ago

Another possible cause is whether the model is evaluated with eval() mode turned on or off (link). Could you run the test with and without the --eval option and see if that makes a difference?
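
(A minimal sketch, in generic PyTorch rather than the repo's generator, of why eval() changes the output: the pix2pix generator contains Dropout and normalization layers, which behave differently in train and eval modes.)

```python
import torch
import torch.nn as nn

# Minimal sketch, generic PyTorch: Dropout and BatchNorm behave
# differently in train and eval modes, so test-time outputs depend
# on which mode the network is in.
net = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Dropout(0.5))
x = torch.randn(4, 8)

net.train()      # dropout active, batch statistics used
y_train = net(x)

net.eval()       # dropout disabled, running statistics used
y_eval = net(x)

print(torch.allclose(y_train, y_eval))  # False: the two modes disagree
```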

ShubhamAbhayDeshpande commented 1 year ago

> Another possible cause is whether the model is evaluated with eval() mode turned on or off (link). Could you run the test with and without the --eval option and see if that makes a difference?

I am also working on the same project, converting RGB to IR, and I have the same observation as above. In my case, however, running the test without '--eval' does not help much. My results are not as bad as the ones above, but there is certainly a significant loss of detail in the fake images at test time. I have attached some example images below.

During training, the fake image (left) and the real image (right) look like this: [image]

And during testing, the fake image (left) and the real image (right) look like this: [image]

Is there any other way for me to improve these results? I am currently using part of the KAIST dataset.

junyanz commented 1 year ago

The model might be overfitting the training set. To prevent overfitting, you can either use a larger dataset or apply more aggressive augmentation (see the --preprocess option for more details).
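
(A rough illustration of heavier augmentation, again with torchvision stand-ins rather than the repo's get_transform; the crop and jitter values are illustrative guesses. In this repo the equivalent knob is --preprocess resize_and_crop with a larger gap between --load_size and --crop_size, plus flipping unless --no_flip is set.)

```python
import torchvision.transforms as T

# Hedged sketch of more aggressive augmentation (illustrative values):
aug = T.Compose([
    T.Resize(320),                    # load larger than the crop size
    T.RandomCrop(256),                # random crops add spatial variety
    T.RandomHorizontalFlip(p=0.5),    # flips, as the repo does by default
    T.ColorJitter(brightness=0.1, contrast=0.1),  # mild photometric noise
    T.ToTensor(),
])
```

Note that for a paired dataset like pix2pix, the same random crop and flip must be applied to both sides of each pair; the repo's aligned dataset handles that internally.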