When training multiple networks in a row, a bug related to default parameters appears. Essentially, attempting something like the following can cause an issue within TensorFlow:
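(The original snippet is not reproduced here; the stand-in class and names below are a minimal sketch of the failing pattern, not cosmopower's actual code.)

```python
import tensorflow as tf

# Minimal stand-in for cosmopower_NN / cosmopower_PCAplusNN (illustrative only):
# the default optimizer is evaluated once, when the class is defined,
# not once per instance.
class EmulatorNN:
    def __init__(self, optimizer=tf.keras.optimizers.Adam()):
        self.optimizer = optimizer

nn_first = EmulatorNN()
nn_second = EmulatorNN()

# Both instances hold the very same Adam object, so training the second
# network reuses optimizer state tied to the first one.
assert nn_first.optimizer is nn_second.optimizer
```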
This is caused by the `optimizer = tf.keras.optimizers.Adam()` default parameter of `cosmopower_NN` and `cosmopower_PCAplusNN`: because a default argument is evaluated only once, both networks end up using the same optimizer object rather than each creating its own, which makes TensorFlow fail when training the second network.

This pull request fixes the issue by making the cosmopower object instantiate the optimizer during the `__init__()` call instead of in its default parameters.
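For illustration, a minimal sketch of that pattern using the same stand-in class (the exact change in cosmopower may differ, for example in how an optional user-supplied optimizer is handled):

```python
import tensorflow as tf

# Same stand-in class, with the optimizer now created inside __init__():
# every instance gets its own, freshly constructed Adam object.
class EmulatorNN:
    def __init__(self, optimizer=None):
        self.optimizer = optimizer if optimizer is not None else tf.keras.optimizers.Adam()

nn_first = EmulatorNN()
nn_second = EmulatorNN()

# The two networks no longer share optimizer state.
assert nn_first.optimizer is not nn_second.optimizer
```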