cambridge-mlg / cnaps

Code for: "Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes" and "TaskNorm: Rethinking Batch Normalization for Meta-Learning"
MIT License

Question regarding the regularizer #2

Closed nvcuong closed 4 years ago

nvcuong commented 4 years ago

https://github.com/cambridge-mlg/cnaps/blob/5a11dc0c450b0b0728de98eb64a0d1bc83035a2e/src/adaptation_networks.py#L148

I have a question regarding the regularizer at the link above. Is the regularizer shared across all of gamma1, beta1, gamma2, and beta2?

jfb54 commented 4 years ago

Thanks for raising this issue (a very good catch!). As currently written, the regularizer is shared across gamma1, beta1, gamma2, and beta2 (in every convolutional layer). The intention, as described in our paper, was that they not be shared. Due to a programming oversight, we didn't realize that torch.nn.Parameter re-uses the input tensor rather than creating a new copy, so all four parameters ended up wrapping the same underlying storage. A fix will be pushed to the repo shortly. However, this change will not affect the resulting classification accuracy: the results with and without sharing are almost identical. Thank you again for raising this issue!
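For anyone who hits the same pitfall, here is a minimal standalone sketch (not the repo's actual code; the variable names are illustrative) showing that torch.nn.Parameter wraps the tensor it is given without copying, so passing the same tensor in repeatedly yields parameters that alias one another:

```python
import torch
import torch.nn as nn

init = torch.ones(8)  # a single initial-value tensor

# Buggy pattern: both parameters wrap the SAME underlying storage,
# so regularizing or updating one silently affects the other.
gamma1 = nn.Parameter(init)
beta1 = nn.Parameter(init)
assert gamma1.data_ptr() == beta1.data_ptr()  # shared memory

# Fixed pattern: clone the initial value so each parameter gets
# its own, independently trainable storage.
gamma2 = nn.Parameter(init.clone())
beta2 = nn.Parameter(init.clone())
assert gamma2.data_ptr() != beta2.data_ptr()  # independent memory

# Demonstrate the aliasing concretely: an in-place change to gamma1
# is visible through beta1, but gamma2 and beta2 stay independent.
with torch.no_grad():
    gamma1.add_(1.0)
print(beta1[0].item())   # changed along with gamma1
print(beta2[0].item())   # unaffected
```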

jfb54 commented 4 years ago

This change has now been pushed.