Open mcvta opened 2 years ago
This seems to be a bug in the wateRtemp package. Can you file an issue there?
I also just pushed a commit to keras simplifying the layer_alpha_dropout() wrapper, but it won't fix this bug.
Hi, I have already done that:
https://github.com/MoritzFeigl/wateRtemp/issues/1
Can you run the FNN (wt_fnn) with the test dataset that is provided here: https://github.com/MoritzFeigl/wateRtemp?
Just to check whether it is a bug in the original code.
Thank you,
Hi everyone,
I'm running the Feed-Forward Neural Network (FNN) with R (4.1.2) and TensorFlow (2.7.0) that is available from: https://github.com/MoritzFeigl/wateRtemp.
I'm using the test dataset that is available from the same source. After the optimization process, when the model tries to run the best model, I get the following error:
** Starting FNN computation for catchment test_catchment ***
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/standard_FNN/scaling_values.csv
Using existing scores as initial grid for the Bayesian Optimization
Bayesian Hyperparameter Optimization: 40 iterations were already computed
Run the best performing model as ensemble:
2022-02-02 13:06:51.059425: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-02-02 13:06:51.060834: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Loaded Tensorflow version 2.7.0
Error in py_call_impl(callable, dots$args, dots$keywords) :
  TypeError: Exception encountered when calling layer "alpha_dropout" (type AlphaDropout).
  '>' not supported between instances of 'dict' and 'float'
Call arguments received:
  • inputs=tf.Tensor(shape=(None, 42), dtype=float32)
  • training=None
In addition: Warning message:
In if (dropout_layers) { : the condition has length > 1 and only the first element will be used
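The TypeError in the traceback can be reproduced with a minimal Python sketch (independent of wateRtemp; the dict key name below is a hypothetical illustration). Keras compares the dropout rate against float bounds internally, so if the R wrapper ends up passing a named list, reticulate converts it to a Python dict and the comparison fails with exactly this message:

```python
# Hypothetical illustration: a dict standing in for a dropout rate.
# Keras compares the rate to floats (e.g. rate > 0), which is invalid
# for a dict and raises the same TypeError seen in the traceback.
rate = {"dropout": 2.22044604925031e-16}  # a dict instead of a plain float

try:
    rate > 0.0  # the kind of comparison that fails inside the layer
except TypeError as e:
    print(e)  # '>' not supported between instances of 'dict' and 'float'
```

This suggests the bug is in how the hyperparameters are unpacked before being handed to the AlphaDropout layer, not in the dropout value itself.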
This is the code that I'm using to run the model:
These are the parameters for the best model:
layers = 3
units = 200
max_epoc = 100
early_stopping_patience = 5
batch_size = 60
dropout = 2.22044604925031E-16
ensemble = 1
Could this problem be related to the very small dropout value (2.22044604925031E-16)?
Thank you
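One observation on that value (a sketch, not a diagnosis): 2.22044604925031E-16 is the double-precision machine epsilon printed at reduced precision, which a Bayesian optimizer with a lower bound of 0 will often return instead of an exact 0.0. So the dropout rate is effectively zero and is unlikely to be the cause of a TypeError by itself:

```python
import math
import sys

# The reported dropout value matches the IEEE-754 double machine epsilon
# (2^-52), up to the 15 significant digits R printed. A value this small
# means the optimizer effectively chose "no dropout".
reported = 2.22044604925031e-16
print(math.isclose(reported, sys.float_info.epsilon))  # True
```

A tiny-but-positive rate is still a valid float, so the '>' comparison would succeed on it; the error points at the rate arriving as the wrong type, not the wrong magnitude.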