pmorerio / minimal-entropy-correlation-alignment

Code for the paper "Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation", ICLR 2018
MIT License

SVHN → MNIST Architecture #2

Closed: redhat12345 closed this issue 6 years ago

redhat12345 commented 6 years ago

In your paper you mention that "The architecture is the very same employed in [Ganin & Lempitsky (2015)] with the only difference that the last fully connected layer (fc2) has only 64 units instead of 2048. Performances are the same, but covariance computation is less onerous. fc2 is in fact the layer where domain adaptation is performed." But in your code I found (maybe I am wrong) that fc2 has 128 units. Could you please explain this a bit more?

```python
hidden_size = 128
self.hidden_repr_size = hidden_size

net = slim.fully_connected(net, self.hidden_repr_size, activation_fn=tf.tanh, scope='fc4')
```

pmorerio commented 6 years ago

Hi. The default value for the model class logDcoral is indeed 128:
https://github.com/pmorerio/minimal-entropy-correlation-alignment/blob/ef6a2b89af29eeda14ad68b26d7f22c19999662c/svhn2mnist/model.py#L9

However, when the model is instantiated in main.py, the hidden_size argument is passed with the value 64, consistent with the paper:
https://github.com/pmorerio/minimal-entropy-correlation-alignment/blob/ef6a2b89af29eeda14ad68b26d7f22c19999662c/svhn2mnist/main.py#L18
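In short, the pattern is a default constructor argument that main.py overrides at instantiation. A minimal sketch of that pattern (the actual constructor signature in model.py may differ; only the names logDcoral, hidden_size, and hidden_repr_size are taken from the repository):

```python
class logDcoral(object):
    # Default of 128 mirrors the value in model.py, but callers can override it.
    def __init__(self, hidden_size=128):
        self.hidden_repr_size = hidden_size

# main.py instantiates the model with 64 units, matching the paper.
model = logDcoral(hidden_size=64)
print(model.hidden_repr_size)  # prints: 64
```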

redhat12345 commented 6 years ago

Thank you so much.