perceptualJonathan opened this issue 3 years ago
Hi,
apparently I got the same error. The error message was:
```
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements 'get_config'and 'from_config' when saving. In addition, please use the 'custom_objects' arg when calling 'load_model()'.
```
I used the StructuredDataClassifier. Apparently the custom metric is not saved with the model.
@garyee Would you paste your code? I am trying to reproduce the issue. We actually did save the custom metric.
https://colab.research.google.com/drive/1wmQx004H-a1QLsOmhmJcw4f4DubjC7qT?usp=sharing
If you uncomment the two parameters in the `StructuredDataClassifier` definition and run the cell, you will get the error. This might be a mistake on my part.
Hello, I'm having the same issue here. Is there any solution to it? Thanks.
I have the same problem.
I think that this line should be changed: https://github.com/keras-team/autokeras/blob/0d22c6f6a611cdfc017a24c28c68e7925b7f7feb/autokeras/engine/tuner.py#L63
When the user provides a custom metric, it should become something like this:
```python
model = tf.keras.models.load_model(self.best_model_path, custom_objects={"custom_metric": custom_metric})
```
(Source: https://github.com/tensorflow/tensorflow/issues/33648#issuecomment-594908246)
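For illustration, a minimal sketch of that patch, assuming `get_best_model()` does little more than wrap `tf.keras.models.load_model` (autokeras's distribution-strategy handling around the load is omitted here):
```python
import tensorflow as tf

def get_best_model(self, custom_objects=None):
    # Forward user-supplied custom objects (metrics, losses, layers) to Keras
    # so that deserialization can resolve them by name.
    return tf.keras.models.load_model(
        self.best_model_path, custom_objects=custom_objects
    )
```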
Hi, I can confirm this issue is still present in the latest AutoKeras version, v1.0.12. In my case I found out when I wanted to get the best model after using a StructuredDataClassifier (with a custom metric) via:
```python
# get the best performing model
best_model = reg.export_model()
```
which failed with the following error:
```
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements 'get_config'and 'from_config' when saving. In addition, please use the 'custom_objects' arg when calling 'load_model()'.
```
I managed to fix this issue with the solution @KeikiHekili mentioned earlier (the changes in `autokeras/auto_model.py` and `autokeras/engine/tuner.py`), and by also changing the following in `autokeras/auto_model.py`.
Original:
```python
def export_model(self):
    """Export the best Keras Model.

    # Returns
        tf.keras.Model instance. The best model found during the search, loaded
        with trained weights.
    """
    return self.tuner.get_best_model()
```
Changed:
```python
def export_model(self, custom_objects={}):
    """Export the best Keras Model.

    # Returns
        tf.keras.Model instance. The best model found during the search, loaded
        with trained weights.
    """
    if custom_objects:
        return self.tuner.get_best_model(custom_objects=custom_objects)
    else:
        return self.tuner.get_best_model()
```
After this change I could use a custom metric as follows:
```python
import kerastuner

reg = ak.StructuredDataRegressor(
    max_trials=3,
    overwrite=True,
    metrics=[spearman_rankcor],
    objective=kerastuner.Objective("spearman_rankcor", direction="max"),
)

# Feed the structured data regressor with training data.
reg.fit(training_data[feature_names], training_data[TARGET_NAME], epochs=10)
```
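For reference, `spearman_rankcor` itself is not defined in this thread; a common way to implement it (a sketch assuming scipy is available, not necessarily the poster's exact definition) is to wrap `scipy.stats.spearmanr` in a `tf.py_function`:
```python
import tensorflow as tf
from scipy import stats

def spearman_rankcor(y_true, y_pred):
    # Compute the Spearman rank correlation outside the graph; py_function
    # hands the metric eager tensors that scipy can consume as arrays.
    return tf.py_function(
        lambda a, b: stats.spearmanr(a, b, axis=0)[0].astype("float32"),
        inp=[tf.cast(y_true, tf.float32), tf.cast(y_pred, tf.float32)],
        Tout=tf.float32,
    )
```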
I performed the model export and saving with:
```python
# get the best performing model
best_model = reg.export_model(custom_objects={'spearman_rankcor': spearman_rankcor})
# summarize the loaded model
best_model.summary()
# Now save the model with round number
logging.info("saving model: %s", MODEL_FILE)
best_model.save(MODEL_FILE)
```
I still need to validate the actual model and check that the custom metric is used properly, but so far the logging output looks OK.
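Note that reloading the saved model later will need the same `custom_objects` mapping again; this follows from the original error message rather than from the post above:
```python
# MODEL_FILE as in the saving snippet above.
loaded_model = tf.keras.models.load_model(
    MODEL_FILE, custom_objects={"spearman_rankcor": spearman_rankcor}
)
```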
I also noticed this problem with the evaluate method, so I tried to solve it the same way you did. But this error comes back:
```
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1323 test_function *
    return step_function(self, iterator)
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1314 step_function **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1285 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica
    return fn(*args, **kwargs)
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1309 run_step **
    with ops.control_dependencies(_minimum_control_deps(outputs)):
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:2888 _minimum_control_deps
    outputs = nest.flatten(outputs, expand_composites=True)
/home/riccardo/Desktop/venv/lib/python3.8/site-packages/tensorflow/python/util/nest.py:416 flatten
    return _pywrap_utils.Flatten(structure, expand_composites)

TypeError: '<' not supported between instances of 'function' and 'str'
```
This problem is still going on. This post https://stackoverflow.com/questions/65549053/typeerror-not-supported-between-instances-of-function-and-str says that it can be fixed by compiling the model, but AutoModel doesn't have direct access to the loss and optimizer. I wonder whether compiling without them would fix it; I'm going to try it out. It would be great if this issue could be fixed, since it has been open for a long time. Either that, or maybe remove custom metrics from the documentation, because it makes them look simple to use when at the moment they are not.
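For what it's worth, the StackOverflow fix amounts to re-compiling the loaded model so that Keras rebuilds its metric containers. A minimal sketch of that idea (compiling with only the metric, no loss or optimizer, is my reading of the suggestion, and the path is a stand-in):
```python
import tensorflow as tf

model = tf.keras.models.load_model(
    "best_model", custom_objects={"spearman_rankcor": spearman_rankcor}
)
# Re-compiling rebuilds the metric containers, so evaluate() resolves metric
# names instead of comparing raw function objects to strings.
model.compile(metrics=[spearman_rankcor])
```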
I can confirm that the combination of the other edits suggested in this issue, together with compiling the model with the custom metric in evaluate, solves the issue. For me it works with the following version of `AutoModel.evaluate`:
```python
def evaluate(self, x, y=None, batch_size=32, verbose=1, custom_objects={}, **kwargs):
    """Evaluate the best model for the given data.

    # Arguments
        x: Any allowed types according to the input node. Testing data.
        y: Any allowed types according to the head. Testing targets.
            Defaults to None.
        batch_size: Number of samples per batch.
            If unspecified, batch_size will default to 32.
        verbose: Verbosity mode. 0 = silent, 1 = progress bar.
            Controls the verbosity of
            [keras.Model.evaluate](http://tensorflow.org/api_docs/python/tf/keras/Model#evaluate)
        **kwargs: Any arguments supported by keras.Model.evaluate.

    # Returns
        Scalar test loss (if the model has a single output and no metrics) or
        list of scalars (if the model has multiple outputs and/or metrics).
        The attribute model.metrics_names will give you the display labels for
        the scalar outputs.
    """
    self._check_data_format((x, y))
    if isinstance(x, tf.data.Dataset):
        dataset = x
        x = dataset.map(lambda x, y: x)
        y = dataset.map(lambda x, y: y)
    x = self._adapt(x, self.inputs, batch_size)
    y = self._adapt(y, self._heads, batch_size)
    dataset = tf.data.Dataset.zip((x, y))
    pipeline = self.tuner.get_best_pipeline()
    dataset = pipeline.transform(dataset)
    if custom_objects:
        model = self.tuner.get_best_model(custom_objects=custom_objects)
        # Only gets metrics from custom_objects for now.
        model.compile(metrics=[val for key, val in custom_objects.items()])
    else:
        model = self.tuner.get_best_model()
    return utils.evaluate_with_adaptive_batch_size(
        model=model, batch_size=batch_size, x=dataset, verbose=verbose, **kwargs
    )
```
I compile the model with only the metrics provided in custom objects.
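With that patch, evaluation would look like this (`test_data` is a stand-in name, following the earlier fit example):
```python
results = reg.evaluate(
    test_data[feature_names],
    test_data[TARGET_NAME],
    custom_objects={"spearman_rankcor": spearman_rankcor},
)
```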
### Bug Description
### Bug Reproduction
Code for reproducing the bug:
Data used by the code: Breast Cancer Dataset
### Expected Behavior
### Setup Details
Include the details about the versions of:
### Additional context
I have come up with a solution to this problem by editing a couple of the autokeras files. In `autokeras/auto_model.py`, I changed the `predict()` function to accept a `custom_objects` argument. Then in `autokeras/engine/tuner.py`, I changed the `get_best_model()` function to pass those custom objects through to `load_model()`. Lastly, in the above code that I used, I replaced
```python
predictedClasses = clf.predict(xTest)
```
with
```python
predictedClasses = clf.predict(xTest, custom_objects={'matthewsCorrelation': matthewsCorrelation})
```
With these changes made, everything runs as I would expect.
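`matthewsCorrelation` itself is not shown in the thread; a common Keras-backend implementation of the Matthews correlation coefficient for binary outputs (a sketch, not the poster's code) looks like this:
```python
import tensorflow.keras.backend as K

def matthewsCorrelation(y_true, y_pred):
    # Round predictions to hard 0/1 labels and count the confusion matrix.
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.round(K.clip(y_true, 0, 1))
    y_neg = 1 - y_pos
    tp = K.sum(y_pos * y_pred_pos)
    tn = K.sum(y_neg * y_pred_neg)
    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)
    # MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)),
    # with epsilon guarding against division by zero.
    numerator = tp * tn - fp * fn
    denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / (denominator + K.epsilon())
```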