Official PyTorch implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
How to understand the CAM loss for the generator including fake_B2A and fake_A2A? #39
For the generator, what is the CAM supposed to classify?
Should the real image be classified as 0 and the fake image as 1?
For the discriminator, it is clear that real images are classified as 1 and fake images as 0.
So I am confused about what the CAM loss means for the generator.
I've read the paper. According to it, the idea seems to be to tune the feature maps in the decoding part to look more like images from the target domain than from the source domain.
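For reference, here is a minimal sketch of how the generator-side CAM loss appears to be wired up in the repo's training step (names follow `UGATIT.py`; the logit tensors below are random placeholders standing in for the auxiliary-classifier outputs). For `genB2A`, inputs from domain B (the images it must translate) are labeled 1, while inputs from domain A (the identity branch, `fake_A2A`) are labeled 0, so the encoder learns which features distinguish the two domains and the attention map can focus on the regions that need to change:

```python
import torch
import torch.nn as nn

# The repo uses BCEWithLogitsLoss for the CAM logits.
BCE_loss = nn.BCEWithLogitsLoss()

# Placeholder CAM logits from genB2A's auxiliary classifier:
#   fake_B2A_cam_logit: logit for real_B fed through genB2A (translation branch)
#   fake_A2A_cam_logit: logit for real_A fed through genB2A (identity branch)
fake_B2A_cam_logit = torch.randn(1, 1)
fake_A2A_cam_logit = torch.randn(1, 1)

# genB2A's auxiliary classifier is pushed to output 1 for domain-B inputs
# and 0 for domain-A inputs. This is not "real vs. fake" as in the
# discriminator; it is a source-domain vs. target-domain classification
# that shapes the generator's attention.
G_cam_loss_A = BCE_loss(fake_B2A_cam_logit, torch.ones_like(fake_B2A_cam_logit)) \
             + BCE_loss(fake_A2A_cam_logit, torch.zeros_like(fake_A2A_cam_logit))
```

So, to answer the question directly: the generator's CAM does not classify real as 0 and fake as 1; it classifies the generator's *input* by domain, which is why the loss involves the logits for both `fake_B2A` (input from B, target 1) and `fake_A2A` (input from A, target 0).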