Closed: redhat12345 closed this issue 6 years ago
In your paper you mention that "The architecture is the very same employed in [Ganin & Lempitsky (2015)], with the only difference that the last fully connected layer (fc2) has only 64 units instead of 2048. Performances are the same, but covariance computation is less onerous. fc2 is in fact the layer where domain adaptation is performed." In your code, however, I found that fc2 has 128 units (maybe I am wrong):

hidden_size = 128
self.hidden_repr_size = hidden_size
net = slim.fully_connected(net, self.hidden_repr_size, activation_fn=tf.tanh, scope='fc4')

Could you please explain this a little more?

Hi. The default value of the hidden_size argument for the model class logDcoral is indeed 128:
https://github.com/pmorerio/minimal-entropy-correlation-alignment/blob/ef6a2b89af29eeda14ad68b26d7f22c19999662c/svhn2mnist/model.py#L9
However, when the model is instantiated in main.py, the hidden_size argument is passed with the value 64, consistently with the paper:
https://github.com/pmorerio/minimal-entropy-correlation-alignment/blob/ef6a2b89af29eeda14ad68b26d7f22c19999662c/svhn2mnist/main.py#L18
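To make the mechanics concrete, here is a minimal sketch of the default-vs-override pattern described above. The class name and the reduced argument list are hypothetical simplifications, not the repository's exact code:

# Minimal sketch (hypothetical class name and signature): the constructor
# declares a default of 128, but the call site passes 64 explicitly,
# so the default is never used.
class Model(object):
    def __init__(self, hidden_size=128):  # default of 128, as in model.py
        self.hidden_repr_size = hidden_size

model = Model(hidden_size=64)  # explicit value, as in main.py
print(model.hidden_repr_size)  # prints 64, not the default 128

Keeping the paper's value at the call site rather than editing the class default leaves the model class reusable with other layer sizes.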
Thank you so much.