keras-team / keras-cv

Industry-strength Computer Vision workflows with Keras

No default signature after converting to tflite #1948

Open nicolawern opened 1 year ago

nicolawern commented 1 year ago

Hello, I ran the object_detection_cv guide in Google Colab and then tried to convert the model to TFLite. The conversion succeeds, but there are no signatures. Inference requires at least the default signature, which I am currently working on adding manually.

Code to reproduce in Colab: https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_cv/object_detection_keras_cv.ipynb

Versions: tensorflow 2.12.0, keras 2.12.0, keras-core 0.1.0, keras-cv 0.5.0

Change the keras_cv install to the line below to avoid a keras_core issue:

!pip install keras_cv==0.5.0

After model.fit(), add:

import tensorflow as tf

# Convert the trained Keras model to TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Print the signatures from the converted model.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
signatures = interpreter.get_signature_list()
print("signatures", signatures)

I saw here (https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb#scrollTo=71f29229) that the Keras model converter API uses the default signature automatically, so I expected the converted model to have one.

I am currently following the above notebook to add the signatures in SavedModel format. Please let me know if there is a fix for this or a better way to add the signatures.
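
Roughly what I'm attempting, as a minimal sketch (assuming a fixed 1x640x640x3 float input, taken from the guide's image size, and converting from the exported SavedModel rather than from the Keras model directly):

import tensorflow as tf

# Wrap the forward pass in a tf.function with an explicit input spec so the
# exported SavedModel carries a serving_default signature. The (640, 640, 3)
# input shape is an assumption based on the guide's image size.
@tf.function(input_signature=[tf.TensorSpec([1, 640, 640, 3], tf.float32, name="images")])
def serving_fn(images):
    return model(images, training=False)

tf.saved_model.save(model, "exported_model", signatures={"serving_default": serving_fn})

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()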

nicolawern commented 1 year ago

Are there any workarounds that come to mind?

I couldn't add the signature to the SavedModel in the correct format. Thanks!

ianstenbit commented 1 year ago

Hi @nicolawern -- the repro script you linked to is our object detection guide, which doesn't include TFLite export. Could you create a standalone minimal repro that I can take a look at?

nicolawern commented 1 year ago

Hi @ianstenbit, just to confirm: do you intend to support conversion to TFLite when using KerasCV?

nicolawern commented 1 year ago

Asking because I assumed KerasCV models would work like Keras models and could be converted to TFLite using tf.lite.TFLiteConverter.from_keras_model, but I want to make sure this assumption is true before I make a repo.

jbischof commented 1 year ago

Yes @nicolawern this is true.

nicolawern commented 1 year ago

Ok, here is the repo: https://github.com/nicolawern/kerascv_to_tflite. I've been running it with Python 3.9.

I thought I found a workaround yesterday by using Keras's save to export the model as a TF SavedModel and then loading it with tf.saved_model, but even though the default signature appears when I convert, it doesn't load correctly for inference.

Let me know if there is anything else.

ianstenbit commented 1 year ago

Interesting -- are you using the latest version of keras_cv?

I am not able to reproduce the error you're seeing. Take a look at this Colab I was testing in. If you run this code in your environment, do you still not see any model signatures?

Sorry for the delay 😅

nicolawern commented 1 year ago

I am using the latest version.

I do see the model signatures there, but when I add

print(interpreter.get_input_details()[0]['shape'])

I get [1 1 1 3], when I would expect [640, 640, 3]. Do you get this output as well? This is the signature I get when I use keras.save to save as a TF model (example in the repo).

I tried running my TFLite interpreter with this default signature and it returns a lot of negative numbers.

nicolawern commented 1 year ago

And no worries, thanks for getting back to me!

ianstenbit commented 1 year ago

I'm seeing 'shape_signature': array([-1, -1, -1, 3]) in the input details, which I think means that this accepts dynamic shapes for those inputs (which is what we'd expect).

Seeing lots of negative numbers in the outputs is expected, since the outputs are not yet decoded. I'm not exactly sure how we'd need to update this model to include box decoding + NMS in the TFLite model, as I'm not an expert on TFLite.
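
For feeding a concrete shape into the dynamic-shape signature, a sketch along these lines should work (the 640x640 size is an assumption based on the guide; the outputs are still the raw, undecoded box/class tensors):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_index = interpreter.get_input_details()[0]["index"]

# The converted model has dynamic spatial dims ([-1, -1, -1, 3]), so pick a
# concrete input shape before allocating tensors.
interpreter.resize_tensor_input(input_index, [1, 640, 640, 3])
interpreter.allocate_tensors()

# Placeholder input; replace with a real preprocessed image batch.
images = np.zeros((1, 640, 640, 3), dtype=np.float32)
interpreter.set_tensor(input_index, images)
interpreter.invoke()

# Raw (undecoded) outputs: box regression and classification tensors.
raw_outputs = [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]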

nicolawern commented 1 year ago

Ok, got it, thanks. Do you know why model.outputs is different from what model.predict returns?

model.outputs [<KerasTensor: shape=(None, None, 4) dtype=float32 (created by layer 'box')>, <KerasTensor: shape=(None, None, 20) dtype=float32 (created by layer 'classification')>]

model.predict

dict_keys(['boxes', 'confidence', 'classes', 'num_detections'])

I'm curious because I converted https://keras.io/examples/vision/retinanet/#implementing-anchor-generator to TFLite and it is able to do inference. I noticed that model.outputs there is the same as the output from model.predict.

ianstenbit commented 1 year ago

This is because in KerasCV we've separated box decoding from the default forward pass of the model. (See some discussion of this in #1902.)
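
One direction to explore (untested) is wrapping the forward pass together with the decoder in a single tf.function before conversion. This sketch assumes the KerasCV RetinaNet in your version exposes a decode_predictions(predictions, images) helper and that the decoding/NMS ops are allowed to fall back to TF select ops in the converter:

import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([1, 640, 640, 3], tf.float32, name="images")])
def detect_fn(images):
    raw = model(images, training=False)
    # decode_predictions is an assumption about the KerasCV API: it applies
    # anchor decoding + NMS, which normally happens outside the forward pass.
    return model.decode_predictions(raw, images)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [detect_fn.get_concrete_function()], model
)
# The decoding/NMS ops may not all have TFLite builtins, so allow TF select ops.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()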

nicolawern commented 1 year ago

Ok thank you.