OlafenwaMoses / ImageAI

A Python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
https://www.genxr.co/#products
MIT License
8.57k stars · 2.19k forks

Converting model into tflite #162

Open dhaval455 opened 5 years ago

dhaval455 commented 5 years ago

Hello, you have done a great job, thanks for that. The problem is that when I try to convert the model into a .tflite model, I get an error like this:

```
Traceback (most recent call last):
  File "convert.py", line 5, in <module>
    converter = tf.lite.TFLiteConverter.from_keras_model_file("resnet50.h5")
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/lite/python/lite.py", line 404, in from_keras_model_file
    keras_model = _keras.models.load_model(model_file)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 216, in load_model
    custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/layers/serialization.py", line 69, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/network.py", line 1225, in from_config
    process_layer(layer_data)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/network.py", line 1209, in process_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/layers/serialization.py", line 69, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 415, in from_config
    return cls(**config)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/layers/normalization.py", line 153, in __init__
    name=name, trainable=trainable, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/checkpointable/base.py", line 456, in _method_wrapper
    method(self, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 130, in __init__
    raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'freeze')
```

I am loading and saving the model like this:

```python
from imageai.Detection import ObjectDetection
import os

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "weights/resnet50_coco_best_v2.0.1.h5"))
a = detector.loadModel()
a.save("resnet50.h5")
```

Then, when I try to convert the saved .h5 model into .tflite, I get the error above. Any help would be appreciated.
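For reference, the unknown `freeze` keyword most likely comes from the custom BatchNormalization layer that ImageAI's RetinaNet backbone inherits from keras-resnet, which plain Keras cannot deserialize. One possible workaround (a sketch, not verified against this exact model; `FreezableBatchNormalization` is a hypothetical name, not part of ImageAI or keras-resnet) is to register a compatible custom layer when loading the .h5 file:

```python
import os
import tensorflow as tf

# Hypothetical workaround: a BatchNormalization subclass that accepts the
# 'freeze' kwarg stored in the saved model config, so Keras can
# deserialize the layer instead of raising TypeError.
class FreezableBatchNormalization(tf.keras.layers.BatchNormalization):
    def __init__(self, freeze=False, **kwargs):
        # A frozen BN layer is simply non-trainable.
        kwargs.setdefault("trainable", not freeze)
        super().__init__(**kwargs)
        self.freeze = freeze

    def get_config(self):
        # Round-trip 'freeze' so the model can be re-saved.
        config = super().get_config()
        config["freeze"] = self.freeze
        return config

# Guarded so the sketch runs even without the model file present.
if os.path.exists("resnet50.h5"):
    model = tf.keras.models.load_model(
        "resnet50.h5",
        custom_objects={"BatchNormalization": FreezableBatchNormalization},
        compile=False,
    )
```

After a successful load, the model could then be handed to the TFLite converter as usual.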

OlafenwaMoses commented 5 years ago

Support for converting models will be added very soon.

OlafenwaMoses commented 5 years ago

See this article for the latest update on this.

https://towardsdatascience.com/the-story-and-future-of-imageai-one-year-anniversary-e63c80f527c8

tomups commented 5 years ago

Any update on this? I'm trying to convert a YoloV3 object detection model to TFLite with tflite_convert, but I'm getting this error:

```shell
$ tflite_convert --output_file=mobile.tflite --keras_model_file=detection_model-ex-013--loss-0007.917.h5
```

```
WARNING:tensorflow:From c:\python35\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-09-09 17:33:22.199636: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-09-09 17:33:22.369699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-09-09 17:33:22.370182: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-09 17:33:22.779092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-09 17:33:22.779240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-09-09 17:33:22.779296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-09-09 17:33:22.779514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4716 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
WARNING:tensorflow:From c:\python35\lib\site-packages\tensorflow\lite\python\lite.py:591: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From c:\python35\lib\site-packages\tensorflow\python\framework\graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
Traceback (most recent call last):
  File "c:\python35\lib\runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python35\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Python35\Scripts\tflite_convert.exe\__main__.py", line 9, in <module>
  File "c:\python35\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 442, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "c:\python35\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "c:\python35\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 438, in run_main
    _convert_model(tflite_flags)
  File "c:\python35\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 191, in _convert_model
    output_data = converter.convert()
  File "c:\python35\lib\site-packages\tensorflow\lite\python\lite.py", line 411, in convert
    "invalid shape '{1}'.".format(_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'input_1' has invalid shape '[None, None, None, 3]'.
```

It seems the model is missing some information, like the network input and output sizes?

tomups commented 5 years ago

I finally managed to convert to TFLite by using these settings:

```shell
$ tflite_convert --output_file=mobile.tflite --keras_model_file=detection_model-ex-013--loss-0007.917.h5 --input_arrays=input_1 --input_shapes=1,416,416,3
```

Now let's see if I can manage to get it to run in Android with visible output!
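For anyone who prefers the Python API, the command above should be roughly equivalent to the following sketch (assumes TF 1.x; in TF 2.x `from_keras_model_file` lives on `tf.compat.v1.lite.TFLiteConverter`). Fixing the input shape is what avoids the "None is only supported in the 1st dimension" error:

```python
import os
import tensorflow as tf

MODEL_FILE = "detection_model-ex-013--loss-0007.917.h5"

# Use the TF 1.x converter if available (tf.compat.v1 in TF 2.x).
v1 = getattr(tf.compat, "v1", tf)

# Guarded so the sketch runs even without the model file or a 1.x converter.
if os.path.exists(MODEL_FILE) and hasattr(v1.lite.TFLiteConverter, "from_keras_model_file"):
    converter = v1.lite.TFLiteConverter.from_keras_model_file(
        MODEL_FILE,
        input_arrays=["input_1"],
        # Pin the dynamic [None, None, None, 3] input to a concrete shape.
        input_shapes={"input_1": [1, 416, 416, 3]},
    )
    tflite_model = converter.convert()
    with open("mobile.tflite", "wb") as f:
        f.write(tflite_model)
```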

ola0x commented 5 years ago

I had a similar issue when converting my trained Inception model to .tflite. Error message:

```
  File "C:\Users\hp\Documents\ml\New Model\CustomModel.py", line 13, in <module>
    predictor.save_model_to_tensorflow(new_model_folder=os.path.join(execution_path, "tensorflow_model"), new_model_name="idenprof_resnet_tensorflow.pb")
  File "C:\Program Files\Python37\lib\site-packages\imageai\Prediction\Custom\__init__.py", line 589, in save_model_to_tensorflow
    main_graph = graph_util.convert_variables_to_constants(sess, init_graph, out_nodes)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 297, in convert_variables_to_constants
    source_op_name = get_input_name(node)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 254, in get_input_name
    raise ValueError("Tensor name '{0}' is invalid.".format(node.input[0]))
ValueError: Tensor name 'batch_normalization/cond/ReadVariableOp/Switch:1' is invalid.
```
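A workaround often reported for this particular freeze-graph error (not verified against this exact model) is to switch Keras to inference mode before the model is built or loaded, so the BatchNormalization layers are created without the training-phase `cond`/`Switch` ops that `convert_variables_to_constants` cannot rewrite:

```python
import tensorflow as tf

# Hedged sketch: set the Keras learning phase to inference (0) *before*
# loading the model. The API lives on the TF 1.x compat backend and may
# be absent in newer TF versions, hence the guards.
v1 = getattr(tf.compat, "v1", None)
backend = getattr(getattr(v1, "keras", None), "backend", None)
if backend is not None and hasattr(backend, "set_learning_phase"):
    backend.set_learning_phase(0)  # 0 = inference mode
# ...then create the predictor, load the model, and call
# save_model_to_tensorflow() as in the snippet above.
```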

nekapoor commented 4 years ago

> I finally managed to convert to TFLite by using these settings:
>
> ```shell
> $ tflite_convert --output_file=mobile.tflite --keras_model_file=detection_model-ex-013--loss-0007.917.h5 --input_arrays=input_1 --input_shapes=1,416,416,3
> ```
>
> Now let's see if I can manage to get it to run in Android with visible output!

@tomtastico were you able to get this working after generating your ImageAI model? Also, is the 'detection_config.json' file that ImageAI generates not needed for this conversion? Thanks so much.

tomups commented 4 years ago

@nekapoor It was a long time ago so I can't fully recall, sorry, but I think I didn't manage to make it work after all; even after converting, I couldn't find the right settings for the TFLite interpreter. In the end I used Google's AutoML, which now allows exporting object detection models to TFLite.
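For anyone attempting the interpreter step described above, loading the converted file looks roughly like this (a sketch assuming the fixed 1x416x416x3 float input used during conversion; decoding the raw YOLOv3 head outputs into boxes and scores is model-specific and is likely the genuinely hard part):

```python
import os
import numpy as np
import tensorflow as tf

# Placeholder image matching --input_shapes=1,416,416,3.
frame = np.zeros((1, 416, 416, 3), dtype=np.float32)

# Guarded so the sketch runs even without the converted model present.
if os.path.exists("mobile.tflite"):
    interpreter = tf.lite.Interpreter(model_path="mobile.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    # Raw network outputs; post-processing (anchors, NMS) still required.
    outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
```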