Open philnovv opened 2 years ago

Hi all,

I was wondering why the generator G was chosen as the network from which to extract intermediate feature maps for the contrastive loss. In other work on image translation, the discriminator D is usually refashioned as a feature-extraction network. Has anyone experimented with applying the contrastive loss to features extracted from D?

In our method, the latent space on which the contrastive loss is enforced is not adversarial to the generator: it tries to find correspondences between the input and the output. We therefore concluded that it is not appropriate to use adversarial features to learn this embedding space.
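For anyone following along, here is a minimal sketch of the kind of contrastive objective being discussed: an InfoNCE-style loss over patch features, where each output patch's positive is the same-location input patch and all other patches act as negatives. This is an illustrative assumption about the setup, not this repo's actual API; the function name `patch_nce_loss` and the shapes are hypothetical. The point the answer makes is that `feat_q`/`feat_k` would come from the generator's encoder, not from D.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """InfoNCE over patch features (illustrative sketch).

    feat_q: (N, C) features of N output patches (queries).
    feat_k: (N, C) features of the N corresponding input patches (keys).
    Each query's positive is its same-location key (the diagonal);
    every other key in the batch serves as a negative.
    """
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1)
    logits = feat_q @ feat_k.t() / tau            # (N, N) cosine similarities
    targets = torch.arange(feat_q.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 patches with 16-dim features; keys are noisy copies of queries,
# mimicking input/output patches that should correspond.
torch.manual_seed(0)
q = torch.randn(8, 16)
k = q + 0.1 * torch.randn(8, 16)
loss = patch_nce_loss(q, k)
```

Because this loss rewards G for embedding corresponding patches near each other, it is cooperative with the generator; computing it on D's features would mix in an adversarial signal, which is the concern raised in the answer above.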