This is not an issue, I just fail to understand how the discriminator loss is calculated. In a traditional GAN you'd label the fake image (the segmentation output) as 0 and the ground truth as 1, and that scalar is the "label" you feed to the discriminator. What do we use in this case, since the real/fake classification is done pixel-wise? Do we create label maps of H×W×C full of ones for the ground truth and full of zeros for the segmentation masks? I don't see how this would work.
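To make my question concrete, here is roughly what I imagine happening (a minimal PyTorch sketch, not taken from the repo; `D`, `real_mask`, and `fake_mask` are my own placeholder names, and I'm assuming a pixel/patch-level discriminator that outputs a spatial map of real/fake logits):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_mask, fake_mask):
    # D maps an (N, C, H, W) mask to an (N, 1, H', W') map of real/fake logits.
    real_logits = D(real_mask)           # scores for ground-truth masks
    fake_logits = D(fake_mask.detach())  # scores for the segmentation output

    # Label maps: all ones for ground truth, all zeros for predictions,
    # shaped like the discriminator's output rather than a single scalar.
    real_labels = torch.ones_like(real_logits)
    fake_labels = torch.zeros_like(fake_logits)

    loss_real = F.binary_cross_entropy_with_logits(real_logits, real_labels)
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, fake_labels)
    return loss_real + loss_fake
```

Is this (averaging the per-pixel BCE over the whole label map) what's actually going on, or is the loss computed some other way?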