I am trying to use the pix2pix model for an image-translation task from a 3-channel input to a 1-channel semantic segmentation mask. I trained the model and tested it on a test set.
I provided hard-label ground-truth segmentations (as real_B).
Still, in the results folder, the real_B image (and the fake_B image) has soft labels: instead of purely black-and-white pixels, the image shows gray shades around the edges. I understand how this could happen for the prediction, but why does the ground truth look like this when I provided black-and-white segmentations in trainB and testB?
Is there an additional preprocessing step that I am not aware of?
P.S.: I cropped the images before passing them to the model, so I use --preprocess none.