alessiospuriomancini / cosmopower

Machine Learning - accelerated Bayesian inference
https://alessiospuriomancini.github.io/cosmopower
GNU General Public License v3.0

Bugfix for multiple network training #16

Closed HTJense closed 1 year ago

HTJense commented 1 year ago

When training multiple networks in a row, a bug appears related to default parameters.

Essentially, attempting something like this:

nn_1 = cp.cosmopower_NN(...)
nn_1.train(...)

nn_2 = cp.cosmopower_NN(...)
nn_2.train(...)

can cause an error within TensorFlow. The cause is the optimizer = tf.keras.optimizers.Adam() default parameter of cosmopower_NN and cosmopower_PCAplusNN: because the default is evaluated only once, both networks end up sharing the same optimizer object rather than each creating its own, and TensorFlow raises an error when the second network is trained.
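A minimal sketch of the underlying Python behaviour (the class name and signature below are simplified stand-ins, not the actual cosmopower API): a default argument is evaluated once at function definition time, so every instance built with the default shares one Adam object.

import tensorflow as tf

class cosmopower_NN_like:
    # Simplified illustration: the Adam() default is created once, when the
    # method is defined, not once per instance.
    def __init__(self, optimizer=tf.keras.optimizers.Adam()):
        self.optimizer = optimizer

nn_1 = cosmopower_NN_like()
nn_2 = cosmopower_NN_like()
assert nn_1.optimizer is nn_2.optimizer  # True: both instances share the same optimizer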

This pull request fixes it by making the cosmopower object instantiate the optimizer during the __init__() call instead of in its default parameters.
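A sketch of the fixed pattern under the same simplified signature (again a stand-in, not the exact cosmopower code): defaulting to None and constructing the optimizer inside __init__ gives each network its own Adam instance.

import tensorflow as tf

class cosmopower_NN_like:
    def __init__(self, optimizer=None):
        # Build a fresh optimizer per instance unless the caller supplies one.
        self.optimizer = optimizer if optimizer is not None else tf.keras.optimizers.Adam()

nn_1 = cosmopower_NN_like()
nn_2 = cosmopower_NN_like()
assert nn_1.optimizer is not nn_2.optimizer  # each network now owns its optimizer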