Can somebody explain why the ADDA implementations (this one as well as the TensorFlow version) use a discriminator with two outputs instead of one? I am also wondering why, here in adapt.py, the source and target features are concatenated and the concatenated batch is then passed to the discriminator for prediction.
Why not use a single output and make one prediction per domain at a time, as is done in most GAN examples (e.g. https://github.com/pytorch/examples/blob/master/dcgan/main.py)?
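To make the question concrete, here is a minimal sketch of the two formulations I mean. The tensor shapes, the `nn.Linear` discriminators, and the random features are just stand-ins for illustration, not the actual code from adapt.py:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

feat_dim = 128
src_feat = torch.randn(8, feat_dim)   # stand-in for encoder(source_images)
tgt_feat = torch.randn(8, feat_dim)   # stand-in for encoder(target_images)

# ADDA style: concatenate both domains into one batch and use a
# discriminator with TWO output logits, trained with CrossEntropyLoss.
disc_two = nn.Linear(feat_dim, 2)
feats = torch.cat([src_feat, tgt_feat], dim=0)
labels = torch.cat([torch.ones(8, dtype=torch.long),    # source = class 1
                    torch.zeros(8, dtype=torch.long)])  # target = class 0
loss_two = nn.CrossEntropyLoss()(disc_two(feats), labels)

# DCGAN style: ONE output logit, one forward pass per domain,
# trained with BCEWithLogitsLoss.
disc_one = nn.Linear(feat_dim, 1)
bce = nn.BCEWithLogitsLoss()
loss_src = bce(disc_one(src_feat).squeeze(1), torch.ones(8))
loss_tgt = bce(disc_one(tgt_feat).squeeze(1), torch.zeros(8))
loss_one = loss_src + loss_tgt

print(loss_two.item(), loss_one.item())
```

Both formulations pose the same binary source-vs-target classification problem (a 2-way softmax over two logits is mathematically equivalent to a sigmoid over the difference of the logits), so I am curious whether the two-output choice is just a convention or has a practical reason.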