Zaf97 opened this issue 3 years ago
Hello! Does the original example run in your setup? Which versions of Keras and TensorFlow are you using? I have not tested much on the latest versions of Keras and TensorFlow and the problem might lie there. You should always add code to the question/issue for easier debugging, or you can also post a question on Stack Overflow.
The original example works when I call disable_eager_execution()!
My define_stacked_model function is the following:
def define_stacked_model(members):
    # update all layers in all models to not be trainable
    for i in range(len(members)):
        model = members[i]
        for layer in model.layers:
            # make not trainable
            layer.trainable = False
            # rename to avoid 'unique layer name' issue
            layer._name = 'ensemble_' + str(i+1) + '_' + layer.name
        model.input._keras_history[0]._name = 'ensemble_' + str(i+1) + '_input'
    # define multi-headed input
    ensemble_visible = [model.input for model in members]
    # concatenate merge output from each model
    ensemble_outputs = [model.output for model in members]
    merge = concatenate(ensemble_outputs)
    mean = Dense(24, activation="linear")(merge)
    var = Dense(24, activation="softplus")(merge)
    model.compile(loss='mae',
                  metrics=['mse', tf.metrics.MeanAbsolutePercentageError()],
                  optimizer='adam')
    train_model = Model(ensemble_visible, mean)
    pred_model = Model(ensemble_visible, [mean, var])
    train_model.compile(loss=deep_ensemble_regression_nll_loss(var),
                        optimizer="adam",
                        metrics=['mae', 'mse', tf.metrics.MeanAbsolutePercentageError()])
    return train_model, pred_model
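For completeness, a sketch of the imports this snippet appears to rely on; the exact module paths, in particular for DeepEnsembleRegressor and deep_ensemble_regression_nll_loss, are my assumption and may differ in your installed version of keras-uncertainty:

import tensorflow as tf
from tensorflow.keras.layers import Dense, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.python.framework.ops import disable_eager_execution  # or tf.compat.v1.disable_eager_execution
# assumed keras-uncertainty imports, module paths may differ in your version:
from keras_uncertainty.models import DeepEnsembleRegressor
from keras_uncertainty.losses import deep_ensemble_regression_nll_loss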
And I use DeepEnsembleRegressor like:
def fit_stacked_model(trainingId, train_x, train_y, val_x, val_y, patience=25, members=None, num_models=2):
    # prepare input data: replicate the inputs once per stacked head
    X = [train_x for _ in range(num_models)]
    val_X = [val_x for _ in range(num_models)]
    model = DeepEnsembleRegressor(lambda: define_stacked_model(members), 1)
    disable_eager_execution()
    hist = model.fit(X, train_y, epochs=30, validation_data=(val_X, val_y),
                     callbacks=[EarlyStopping(patience=patience, restore_best_weights=True)])
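As a rough usage sketch (hypothetical variable names; predict returning a mean and a standard deviation is my assumption based on the library's deep ensemble examples), the fitted ensemble would then be used along these lines:

# build and fit the ensemble, replicating the inputs once per stacked head
num_heads = 2          # number of member models stacked as inputs (hypothetical)
num_estimators = 5     # number of deep-ensemble estimators (hypothetical)
model = DeepEnsembleRegressor(lambda: define_stacked_model(members), num_estimators)
model.fit([train_x] * num_heads, train_y, epochs=30)

# predictive mean and standard deviation for new data (assumed return signature)
pred_mean, pred_std = model.predict([test_x] * num_heads)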
I am using tensorflow==2.6.0 and keras==2.6.0.
I think the only solution is to disable eager execution completely (at the beginning of your code).
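A minimal sketch of what that would look like, assuming the public tf.compat.v1 API; the call has to happen before any of the member models are constructed:

import tensorflow as tf

# disable eager execution before building any Keras model
tf.compat.v1.disable_eager_execution()

# ... only afterwards build the member models and call fit_stacked_model(...)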
Sadly, the library no longer works with Keras 3.3.3 and TensorFlow 2.16.1. Disabling eager execution didn't help either. I'll try a refactoring, but my knowledge of Keras and TensorFlow might not be good enough.
@tobwen I already started a port to Keras 3, but it's not yet complete (not everything works), see https://github.com/mvaldenegro/keras-uncertainty/tree/keras3
Shame on me. I checked the forks, but not the most obvious thing: the branches. Thank you very much!
Dear Matitas,
I am trying to use the keras-uncertainty library but I am getting the following error.
"Exception has occurred: TypeError (note: full exception trace is shown but execution is paused at: tf__train_function)
Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model."
I found this answer on Stack Overflow which solves the problem for the model structure you have in the regression_deep_ensemble example: https://stackoverflow.com/questions/65366442/cannot-convert-a-symbolic-keras-input-output-to-a-numpy-array-typeerror-when-usi
In the first comment on that solution, though, it is mentioned that: "That's not a solution, in my case I'm running on GPU a model with LSTM layer, once I disable the eager execution, another error come's to be LSTM cannot use GPU it's not respecting the criteria"
In my case I also use a structure with LSTM layers, and I call the DeepEnsembleRegressor constructor like DeepEnsembleRegressor(lambda: define_stacked_model(members), 5), since my define_stacked_model also needs an argument. My define_stacked_model constructs a model with concatenate, and when I try to fit the DeepEnsembleRegressor I get the following error: "Calling Model.fit in graph mode is not supported when the Model instance was constructed with eager mode enabled. Please construct your Model instance in graph mode or call Model.fit with eager mode enabled."
Since disabling eager execution is not an option, do you have any idea what could solve the first error?
Thanks in advance!