junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

Pix2pix segmentation soft labels (expected hard labels) #1587

Closed MihaelaCroitor closed 1 year ago

MihaelaCroitor commented 1 year ago

Hi!

I am trying to use the pix2pix model for an image-translation task from a 3-channel input to a 1-channel semantic segmentation mask. I trained the model and tested it on a test set, providing hard-label ground-truth segmentations as real_B. Still, in the results folder, both the real_B and fake_B images have soft labels: instead of purely black and white pixels, there are gray shades around the edges. I understand how this could happen for the prediction, but why does the ground truth look like this when I provided black-and-white segmentations in trainB and testB? Is there an additional preprocessing step that I am not aware of?

P.S.: I cropped the images before passing them to the model, so I use --preprocess none.
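(For context: the asker's --preprocess none already rules out resizing here, but resize-based preprocessing is another common way hard labels get softened. A minimal sketch, using a synthetic mask rather than the asker's actual data, showing that bilinear interpolation introduces intermediate gray values while nearest-neighbor preserves binary labels:)

```python
import numpy as np
from PIL import Image

# Synthetic hard-label mask: a white square on a black background
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 255
img = Image.fromarray(mask)

# Upsample with two different interpolation modes
bilinear_vals = np.unique(np.array(img.resize((256, 256), Image.BILINEAR)))
nearest_vals = np.unique(np.array(img.resize((256, 256), Image.NEAREST)))

print("bilinear unique values:", bilinear_vals)  # extra gray values at edges
print("nearest unique values: ", nearest_vals)   # still only [0, 255]
```

This is why segmentation masks are usually resized with nearest-neighbor interpolation when hard labels must be preserved.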

MihaelaCroitor commented 1 year ago

Saving the combined image as .png instead of .jpg solves the issue: JPEG compression is lossy and introduces intermediate gray values around sharp label edges, while PNG is lossless.