Closed nvcuong closed 4 years ago
Thanks for raising this issue (a very good catch!). As currently written, the regularizer is shared across gamma1, beta1, gamma2, and beta2 in every convolutional layer. The intention was that they not be shared, as described in our paper. Due to a programming oversight, we didn't realize that the torch.nn.Parameter API re-uses the tensor passed to it rather than creating a new copy. A fix will be pushed to the repo shortly. However, this change does not affect the resulting classification accuracy: the results with and without sharing are almost identical. Thank you again for raising this issue!
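For anyone who runs into the same pitfall: here is a minimal sketch (tensor names are made up for illustration) of the behavior described above. Wrapping the same tensor in several `torch.nn.Parameter` calls shares the underlying storage, so the "separate" parameters are really one; cloning the initial tensor gives independent parameters.

```python
import torch
import torch.nn as nn

# torch.nn.Parameter wraps the tensor it is given; it does not copy it.
init = torch.zeros(3)
shared_a = nn.Parameter(init)
shared_b = nn.Parameter(init)  # same underlying storage as shared_a

# Mutating one in place is visible through the other.
with torch.no_grad():
    shared_a.add_(1.0)
print(torch.equal(shared_a, shared_b))  # True: the data is shared

# To get independent parameters, clone the initial tensor for each one.
indep_a = nn.Parameter(init.clone())
indep_b = nn.Parameter(init.clone())
with torch.no_grad():
    indep_a.add_(1.0)
print(torch.equal(indep_a, indep_b))  # False: independent copies
```

This is why initializing gamma1, beta1, gamma2, beta2 from one shared tensor silently ties them together.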
This change has now been pushed.
https://github.com/cambridge-mlg/cnaps/blob/5a11dc0c450b0b0728de98eb64a0d1bc83035a2e/src/adaptation_networks.py#L148
I have a question regarding the regularizer at the link above. Is the regularizer shared across all of gamma1, beta1, gamma2, and beta2?