I am trying to modify ADDA based on your code. In the original paper, there seem to be two CNNs in the second training phase: one processes examples from the target set, and another, with frozen weights, processes examples from the source set. In this way, the target CNN is 'adjusted' towards the source CNN based on the output of the domain discriminator.
However, reviewing the code, it seems that only a single CNN is used: in the first phase the model (extractor + classifier) is trained, and in the second phase only the domain discriminator. My question is how the training of both phases is then reproduced in the library. Thank you very much!
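For reference, the two-network setup described in the original paper can be sketched roughly as below in PyTorch. This is a minimal toy sketch, not code from the library: all module definitions, sizes, and names here are hypothetical.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small extractor standing in for the paper's CNN encoder,
# assumed to be already pre-trained on the source domain (phase 1).
extractor_src = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 4))
# The target extractor is initialised from the source extractor...
extractor_tgt = copy.deepcopy(extractor_src)
# ...and the source extractor's weights are frozen during phase 2.
for p in extractor_src.parameters():
    p.requires_grad_(False)

discriminator = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

opt_d = torch.optim.SGD(discriminator.parameters(), lr=0.1)
opt_t = torch.optim.SGD(extractor_tgt.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

x_src, x_tgt = torch.randn(16, 10), torch.randn(16, 10)

# Step 1: update the discriminator to tell source (label 1) from target (label 0).
f_src = extractor_src(x_src)           # frozen encoder: no gradient flows back
f_tgt = extractor_tgt(x_tgt).detach()  # detach: only the discriminator updates here
d_loss = bce(discriminator(f_src), torch.ones(16, 1)) + \
         bce(discriminator(f_tgt), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Step 2: update the target extractor with inverted labels (GAN-style),
# so that target features look like source features to the discriminator.
g_loss = bce(discriminator(extractor_tgt(x_tgt)), torch.ones(16, 1))
opt_t.zero_grad()
g_loss.backward()
opt_t.step()
```

In practice these two steps alternate over many batches; the frozen source extractor never changes, so the discriminator's reference for "source-like" features stays fixed.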
Indeed, the current implementation differs from the original ADDA.
In a previous version, I implemented ADDA with a gradient reversal layer.
You can also refer to this commit, which implements the improved adversarial loss proposed in ADDA, similar to the one used in GANs.
However, I did not observe better performance compared to DANN, so I kept only the current version. Maybe you can achieve better performance through careful tuning.
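A gradient reversal layer can be sketched as a custom autograd `Function`: identity on the forward pass, negated (and scaled) gradient on the backward pass. This is a minimal illustration, not the library's own implementation:

```python
import torch
from torch.autograd import Function

class GradientReverse(Function):
    """Identity forward; multiplies the incoming gradient by -alpha backward."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient for alpha, hence the trailing None.
        return -ctx.alpha * grad_output, None

x = torch.ones(3, requires_grad=True)
y = GradientReverse.apply(x, 1.0).sum()
y.backward()
# x.grad is -1 everywhere: the gradient of sum() was reversed.
```

Placing this layer between the feature extractor and the domain discriminator lets a single backward pass train the discriminator while pushing the extractor in the opposite direction, which is why one network suffices.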
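For comparison, the difference between the DANN-style minimax objective and the GAN-style "inverted label" objective used in ADDA comes down to which loss the feature extractor minimizes on target samples. A sketch with made-up discriminator logits:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bce = nn.BCEWithLogitsLoss()
# Hypothetical discriminator logits on a batch of target features.
d_out_tgt = torch.randn(16, 1, requires_grad=True)

# DANN-style (minimax): the extractor maximises the discriminator's loss,
# implemented by negating it -- which is what a gradient reversal layer does.
loss_minimax = -bce(d_out_tgt, torch.zeros(16, 1))

# ADDA-style (non-saturating, inverted labels): the extractor minimises the
# loss of target features being classified as source (label 1), as in GANs.
loss_inverted = bce(d_out_tgt, torch.ones(16, 1))
```

The two objectives share fixed points but give different gradients, which is the motivation given in the ADDA paper; as noted above, in these experiments the difference did not translate into better accuracy than DANN.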