Open kerighan opened 1 year ago
A `.keras` model is not meant to be consumed by anything other than `keras.models.load_model()`. To use the TF-Lite converter, first create a TF SavedModel via `tf.saved_model.save(model)`.
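The suggested workflow can be sketched roughly as follows (a minimal sketch assuming the TF backend; the tiny `Sequential` model and the temporary directory are placeholders, not from the issue):

```python
import tempfile

import tensorflow as tf

# Tiny stand-in model (hypothetical; substitute your own Keras model).
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

with tempfile.TemporaryDirectory() as saved_dir:
    # Step 1: export a plain TF SavedModel rather than a .keras archive.
    tf.saved_model.save(model, saved_dir)
    # Step 2: point the TF-Lite converter at the SavedModel directory.
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_dir)
    tflite_bytes = converter.convert()
```

`tflite_bytes` is the serialized FlatBuffer, which can then be written out to a `.tflite` file.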
Supporting this might be useful. Usually ONNX is needed to convert a torch model to TF-Lite, so supporting this directly from Keras-Core could be effective. However, `torch.onnx.export` already exists.
@innat @fchollet what is the working pipeline to go from a Keras model, through torch training, to TF-Lite export? viz. https://github.com/keras-team/keras-core/issues/746 . Unfortunately, even ONNX export does not work as expected.
I'm told the TFLite team has TFLite export for Keras Core working internally at Google, but the code has not yet been released. Hopefully you'll get an update soon.
Probably the same issue tf2onnx runs into when moving from Keras 2 to Keras 3:
```python
import tf2onnx
...
onnx_model, _ = tf2onnx.convert.from_keras(model)
# AttributeError: 'Functional' object has no attribute '_get_save_spec'
```
Hi @kerighan -
Loading the `.keras` model with `keras.saving.load_model` and then using TFLite to convert/quantize the model will work fine. Attached gist for your reference.
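That suggestion can be sketched as follows (a minimal sketch; the tiny model and `model.keras` file name are placeholders, and quantization options are omitted):

```python
import tensorflow as tf

# Build and save a tiny stand-in model (hypothetical; use your own .keras file).
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.save("model.keras")

# Reload with the Keras loader, then hand the in-memory model to TF-Lite.
reloaded = tf.keras.saving.load_model("model.keras")
converter = tf.lite.TFLiteConverter.from_keras_model(reloaded)
tflite_bytes = converter.convert()
```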
I get this error when trying to convert/quantize any `.keras` model using TFLite (the model was trained using TF):
Traceback: