Open ChAoss0910 opened 2 years ago
Hi, I'm having the same problem as Hao Chen while testing the original data. I don't know what "_rng" is. Could this be related to the TensorFlow version? I'm using tensorflow 2.7.0. Thank you
data(test_catchment)
wt_preprocess(test_catchment)
train_data <- feather::read_feather("test_catchment/train_data.feather")
test_data <- feather::read_feather("test_catchment/test_data.feather")
wt_fnn(
  train_data,
  test_data = NULL,
  catchment = NULL,
  model_name = NULL,
  seed = NULL,
  n_iter = 40,
  n_random_initial_points = 20,
  epochs = 100,
  early_stopping_patience = 5,
  ensemble_runs = 5,
  bounds_layers = c(1, 5),
  bounds_units = c(5, 200),
  bounds_dropout = c(0, 0.2),
  bounds_batch_size = c(5, 150),
  initial_grid_from_model_scores = TRUE
)
wt_fnn(train_data, test_data, "test_catchment", "standard_FNN")
OUTPUT
> wt_fnn(train_data, test_data, "test_catchment", "standard_FNN")
Starting FNN computation for catchment test_catchment
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/standard_FNN/scaling_values.csv
Random hyperparameter sampling: layers = 3, units = 105, dropout = 0.025, batch_size = 27, ensemble_runs = 1
Error in py_call_impl(callable, dots$args, dots$keywords) :
  AttributeError: 'Context' object has no attribute '_rng'
I know that the error is related to the random number generator (rng).
Hi, I just found out this can be resolved by adding a random seed to the input parameters, like this:
> wt_fnn(train_data = train_data, test_data = test_data, catchment = "CM1", seed = 42, model_name = "fnn")
The default seed is set to NULL in the source code.
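For context, an explicit seed makes random draws reproducible on the R side; a minimal sketch (whether wt_fnn additionally seeds TensorFlow internally, e.g. via `tensorflow::tf$random$set_seed()`, is an assumption):

```r
# Seeding the R RNG makes draws reproducible; passing `seed` to wt_fnn
# presumably triggers something similar internally (assumption).
set.seed(42)
a <- runif(3)
set.seed(42)
b <- runif(3)
identical(a, b)  # TRUE: same seed, same draws

# If TensorFlow is loaded, its own RNG can be seeded separately, e.g.:
# tensorflow::tf$random$set_seed(42L)
```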
Hi, can you run this code?
library("wateRtemp")
library(tensorflow)
data(test_catchment)
wt_preprocess(test_catchment)
train_data <- feather::read_feather("test_catchment/train_data.feather")
test_data <- feather::read_feather("test_catchment/test_data.feather")
wt_fnn(
  train_data,
  test_data = NULL,
  catchment = NULL,
  model_name = NULL,
  seed = NULL,
  n_iter = 40,
  n_random_initial_points = 20,
  epochs = 100,
  early_stopping_patience = 5,
  ensemble_runs = 5,
  bounds_layers = c(1, 5),
  bounds_units = c(5, 200),
  bounds_dropout = c(0, 0.2),
  bounds_batch_size = c(5, 150),
  initial_grid_from_model_scores = TRUE
)
wt_fnn(train_data, test_data, catchment = "test_catchment", seed = 42, model_name = "fnn")
Hi, it's running... great!
Now I'm getting another error:
Error in py_call_impl(callable, dots$args, dots$keywords) : TypeError: Exception encountered when calling layer "alpha_dropout_174" (type AlphaDropout).
'>' not supported between instances of 'dict' and 'float'
Call arguments received: • inputs=tf.Tensor(shape=(None, 42), dtype=float32) • training=None
Got the same error... looking into it now.
Could there be missing values in the dataset?
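One quick way to rule that out is to count NAs per column before calling wt_fnn. A sketch with a toy data frame standing in for the real train_data (the column names here are hypothetical):

```r
# Toy stand-in for the train_data loaded from the feather file;
# the columns Q and Tmean are made up for illustration.
train_data <- data.frame(Q = c(1.2, NA, 3.4), Tmean = c(10, 12, 11))

sum(is.na(train_data))       # total number of missing values
colSums(is.na(train_data))   # missing values per column

# Rows with any NA could then be dropped before training:
clean <- train_data[complete.cases(train_data), ]
nrow(clean)                  # 2 rows remain in this toy example
```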
These are the parameters for the best model: layers = 3, units = 200, max_epoc = 100, early_stopping_patience = 5, batch_size = 60, dropout = 2.22044604925031E-16, ensemble = 1
Could this problem be related to the very small dropout value (2.22044604925031E-16)?
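Worth noting: that value matches R's machine epsilon up to printed precision, i.e. the optimizer effectively proposed a dropout of zero. Whether this causes the AlphaDropout error is unclear, but a sketch of checking it and clamping such values (the 1e-8 threshold is an arbitrary choice):

```r
dropout <- 2.22044604925031e-16

# This equals machine epsilon up to the digits printed above,
# i.e. it is numerically zero:
.Machine$double.eps                          # ~2.220446e-16
abs(dropout - .Machine$double.eps) < 1e-20   # TRUE

# A possible workaround: treat near-zero dropout as exactly 0 before
# it reaches the AlphaDropout layer (1e-8 is an arbitrary threshold).
if (dropout < 1e-8) dropout <- 0
dropout  # 0
```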
Hi, I ran into an issue when running the training code below.
I've double-checked that there is no data-format issue, so I'm not sure if it's a bug in the source code.
I also tested with the original data given in the repo and got the same error:
Could you check whether the dependent libraries need to be upgraded to their newest versions? It looks like something is wrong with the random number generator.