Open nicolawern opened 1 year ago
I couldn't add the signature to the saved model in the correct format. Any workarounds that come to mind? Thanks!
Hi @nicolawern -- the repro script you linked to is our object detection guide, which doesn't include tflite
export -- could you create a standalone minimal repro that I can take a look at?
Hi @ianstenbit, just to confirm - do you intend to support conversion to tflite when using kerascv?
Asking because I assumed KerasCV models would work like keras models and could be converted to tflite using the tf.lite.TFLiteConverter.from_keras_model, but I want to make sure this assumption is true before I make a repo.
Yes @nicolawern this is true.
Ok, here is the repo: https://github.com/nicolawern/kerascv_to_tflite (I've been running with Python 3.9).
I thought I found a workaround yesterday by using Keras' save to export a TF SavedModel and then loading it with tf.saved_model.load, but even though the default signature appears when I convert, it doesn't load correctly for inference.
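Roughly what I'm doing (a minimal sketch; the paths are placeholders and model is the trained detector from the repro):

import tensorflow as tf

# Export the trained KerasCV model in the TF SavedModel format.
model.save("saved_model", save_format="tf")

# Convert the SavedModel to TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)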
Let me know if there is anything else.
Interesting -- are you using the latest version of keras_cv?
I am not able to reproduce the error you're seeing. Take a look at this colab I was testing in. If you run this code in your environment do you still not see any model signatures?
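For reference, the signature check in that colab is essentially this (a sketch, assuming the converted flatbuffer is in tflite_model):

import tensorflow as tf

# Load the converted model and list its serving signatures.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_signature_list())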
Sorry for the delay 😅
I am using the latest version.
I do see the model signatures there, but when I add
print(interpreter.get_input_details()[0]['shape'])
I get [1 1 1 3], whereas I would expect [640, 640, 3]. Do you get this output as well? This is the signature I get when I save via Keras in the TF format (example in the repo).
I tried running my tflite interpreter with this default signature and it returns a lot of negative numbers
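How I'm invoking it, roughly (a sketch; the 640x640 shape is just what I'm resizing my images to):

import tensorflow as tf

runner = interpreter.get_signature_runner("serving_default")
# The keyword argument must match the signature's input name,
# which can be checked via runner.get_input_details().
input_name = list(runner.get_input_details().keys())[0]
outputs = runner(**{input_name: tf.zeros([1, 640, 640, 3], tf.float32)})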
And no worries, thanks for getting back to me!
I'm seeing 'shape_signature': array([-1, -1, -1, 3]) in the input details, which I think means that this accepts dynamic shapes for those inputs (which is what we'd expect).
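So you should be able to resize the input at runtime before allocating tensors, something like this (an untested sketch, assuming a 640x640 input):

interpreter = tf.lite.Interpreter(model_path="model.tflite")
input_index = interpreter.get_input_details()[0]['index']
# Resize the dynamic input to the concrete shape you want to run with.
interpreter.resize_tensor_input(input_index, [1, 640, 640, 3])
interpreter.allocate_tensors()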
Seeing lots of negative numbers as the outputs is expected, since the outputs are not yet decoded. I'm not exactly sure how we'd need to update this model to include box decoding + NMS in the TFLite model, as I'm not an expert on TFLite
Ok got it thanks. Do you know why model.outputs is different from what model.predict returns?
model.outputs
[<KerasTensor: shape=(None, None, 4) dtype=float32 (created by layer 'box')>, <KerasTensor: shape=(None, None, 20) dtype=float32 (created by layer 'classification')>]
model.predict
dict_keys(['boxes', 'confidence', 'classes', 'num_detections'])
Curious because I converted https://keras.io/examples/vision/retinanet/#implementing-anchor-generator to tflite and it is able to do inference. I noticed that model.outputs there matches the output of model.predict.
This is because in KerasCV we've separated box decoding from the default forward pass of the model. (See some discussion in #1902 about this)
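If you need the decoded boxes inside the exported graph itself, one direction (an untested sketch -- the decode_predictions call here is just a placeholder for whatever decoding entry point your keras_cv version exposes) would be to wrap the model and its decoder behind a single serving function:

class DetectorWithDecoding(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec([1, 640, 640, 3], tf.float32)])
    def serving_fn(self, images):
        raw = self.model(images, training=False)
        # Placeholder: substitute the actual decoding + NMS step here.
        return self.model.decode_predictions(raw, images)

wrapped = DetectorWithDecoding(model)
tf.saved_model.save(wrapped, "decoded_saved_model",
                    signatures={"serving_default": wrapped.serving_fn})

Note that the NMS ops may require TFLite's select TF ops when converting.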
Ok thank you.
Hello, I ran the object_detection_keras_cv guide in Google Colab and then tried to convert to tflite. The conversion succeeds but there are no signatures. Inference requires at least the default signature, which I am currently working on adding manually.
Code to reproduce in Colab: https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_cv/object_detection_keras_cv.ipynb
Versions: tensorflow 2.12.0, keras 2.12.0, keras-core 0.1.0, keras-cv 0.5.0
Change keras_cv install to below to avoid keras_core issue
After model.fit(), add the TFLite conversion shown below.
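Roughly this (a sketch of the conversion I mean, using the from_keras_model path mentioned above):

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Listing the signatures of the converted model comes back empty,
# which is the problem described above.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_signature_list())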
I saw here (https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb#scrollTo=71f29229) that the Keras model converter API uses the default signature automatically, so I expected a signature.
I'm currently following the above notebook to add the signatures to the SavedModel format; please let me know if there is a fix for this or a better way to add them.
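What I'm trying, based on that notebook, is along these lines (a sketch; the 640x640 input shape is just what I'm training with):

@tf.function(input_signature=[tf.TensorSpec([1, 640, 640, 3], tf.float32)])
def serving_fn(images):
    return model(images, training=False)

# Export with an explicit serving signature, then convert from the SavedModel.
tf.saved_model.save(model, "saved_model",
                    signatures={"serving_default": serving_fn})
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()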