google-deepmind / functa


Setting batch modulations to zero #10

Closed bkoyuncu closed 1 year ago

bkoyuncu commented 1 year ago

Hi,

Thanks for making the code available! I was not able to find the implementation of setting the batch modulations to zero (defined in the 4th line of Algorithm 1 in the paper) in the repo. Would you mind pointing us to that part?

Thanks in advance, best regards

hyunjik11 commented 1 year ago

Hi, this is done by setting latent_init_scale=0. in experiment_meta_learning.py. I've just made the change and pushed it to the repo, so you should be able to see it in the file. I hope that helps.
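For anyone else reading, here is a minimal sketch of how a latent_init_scale of 0. would force the modulations to start at zero, assuming a Haiku-style latent module; the class and argument names here are illustrative, not necessarily the repo's exact code:

```python
import haiku as hk


class LatentVector(hk.Module):
  """Holds the per-datapoint latent modulation vector as a learnable parameter."""

  def __init__(self, latent_dim: int, latent_init_scale: float, name=None):
    super().__init__(name=name)
    self._latent_dim = latent_dim
    self._latent_init_scale = latent_init_scale

  def __call__(self):
    # With latent_init_scale=0. the uniform range collapses to [0, 0],
    # so the latent (modulation) vector is initialised to all zeros.
    init = hk.initializers.RandomUniform(
        -self._latent_init_scale, self._latent_init_scale)
    return hk.get_parameter(
        'latent_vector', shape=(self._latent_dim,), init=init)
```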

bkoyuncu commented 1 year ago

Hi, thanks so much. Does this initialization also guarantee that the modulations are zeroed after every outer-loop training step?

hyunjik11 commented 1 year ago

Yes. Setting this to 0 ensures that the latent modulations that are part of self._params are initialized to 0 inside the __init__ of experiment_meta_learning.py. These initial values stay fixed throughout meta-learning training, since the _update_func only updates the weights and keeps the modulations fixed (see here).
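To illustrate the idea (this is an assumed sketch, not the repo's exact _update_func): the outer-loop step can partition the parameter tree so that only the shared weights receive an optimizer update, while the zero-initialised modulations are carried through unchanged. Names like `outer_update` and the `'latent_vector'` predicate are hypothetical.

```python
import haiku as hk
import optax


def outer_update(params, opt_state, grads, optimizer):
  """One outer-loop step that updates shared weights only."""
  # Split params/grads into modulations (kept at their zero init) and weights.
  is_modulation = lambda module_name, name, value: 'latent_vector' in name
  modulations, weights = hk.data_structures.partition(is_modulation, params)
  _, weight_grads = hk.data_structures.partition(is_modulation, grads)

  # Only the weights receive an optimizer step (opt_state is assumed to have
  # been initialised on the weight subset).
  updates, opt_state = optimizer.update(weight_grads, opt_state, weights)
  weights = optax.apply_updates(weights, updates)

  # Re-attach the unchanged (still zero) modulations.
  return hk.data_structures.merge(modulations, weights), opt_state
```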