Closed — zonexo closed this issue 3 years ago
Hi,
I finally figured out the problem: lr_normalizer rescales the learning rate, so the values reported by Talos are different from the actual ones passed to Keras/TensorFlow:
    if optimizer == Adadelta:
        pass
    elif optimizer == SGD or optimizer == Adagrad:
        lr /= 100.0
    elif optimizer == Adam or optimizer == RMSprop:
        lr /= 1000.0
    elif optimizer == Adamax or optimizer == Nadam:
        lr /= 500.0
Hopefully this helps others who run into the same issue.
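As a minimal sketch of undoing that scaling (the helper name and the divisor table below are my own, based on the snippet above; check your Talos version's lr_normalizer for the exact values):

    # Hypothetical helper: recover the actual Keras learning rate
    # from the lr value reported in the Talos results CSV.
    def actual_lr(talos_lr, optimizer_name):
        # Divisors mirror the lr_normalizer branches quoted above.
        divisors = {
            'Adadelta': 1.0,
            'SGD': 100.0, 'Adagrad': 100.0,
            'Adam': 1000.0, 'RMSprop': 1000.0,
            'Adamax': 500.0, 'Nadam': 500.0,
        }
        return talos_lr / divisors[optimizer_name]

    # For the run below: Talos reported lr=0.01 with Nadam,
    # so the learning rate actually used was 0.01 / 500 = 2e-05.
    print(actual_lr(0.01, 'Nadam'))

So if you copy lr=0.01 straight into a plain Keras Nadam optimizer, you are training with a learning rate 500x larger than the one Talos actually evaluated, which can easily diverge to NaN.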
I'm getting a totally different MSE between an actual run and Talos 1.0 hyperparameter optimization. I'm using TensorFlow 2.1.0.
Initially I used Talos to get the best hyperparameters. From the CSV, I got MSE ~ 1e-4, which is good, as shown:
    loss:                    0.000106636
    mean_squared_error:      0.000106636
    val_loss:                0.001984425
    val_mean_squared_error:  0.001984425
    neurons:                 400
    activation:              LeakyReLU
    batch_size:              50
    dropout:                 0
    epochs:                  400
    hidden_layers:           16
    kernel_initializer:      he_uniform
    lr:                      0.01
    optimizer:               <class 'tensorflow.python.keras.optimizer_v2.nadam.Nadam'>
I then used these parameters in my original code without Talos, but the answer I got was NaN. Why is this so? How do I debug it?
Part of my code is as follows:
Thanks.