google / automl

Google Brain AutoML
Apache License 2.0

Some errors when converting a SavedModel to a lite format. #229

Closed Ringhu closed 4 years ago

Ringhu commented 4 years ago

Hi, thanks for your contribution. I'm trying to deploy the EfficientDet model to a mobile device and have run into some problems: when I convert the SavedModel to TFLite format, I get errors. Here is my code:

    converter = tf.lite.TFLiteConverter.from_saved_model('saved_model', signature_keys=['serving_default'])
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.experimental_new_converter = True
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()
    open("saved_model/converted_model.tflite", "wb").write(tflite_model)

This follows the TensorFlow Lite user guide, but this error is raised:

    Traceback (most recent call last):
      File "transfer_lite.py", line 17, in <module>
        tflite_model = converter.convert()
      File "/home/hulining/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 446, in convert
        "invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
    ValueError: None is only supported in the 1st dimension. Tensor 'image_arrays' has invalid shape '[1, None, None, None]'.
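One common workaround for this error is to pin the input shape by tracing a concrete function at a fixed resolution before conversion, so only the batch dimension (if any) is left dynamic. A minimal sketch follows; the tiny Keras model with dynamic spatial dimensions is a stand-in for the exported EfficientDet, and the 512x512 size is an assumption:

```python
import tensorflow as tf

# Stand-in for the exported detector: spatial dims are dynamic,
# mimicking the [1, None, None, None] 'image_arrays' signature.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3), batch_size=1),
    tf.keras.layers.Conv2D(4, 3, padding="same"),
])

# Trace the model at a fixed 512x512 input so no dimension is None.
func = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([1, 512, 512, 3], tf.float32))

# Convert from the fixed-shape concrete function instead of the SavedModel.
converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
tflite_model = converter.convert()
```

The trade-off is that the resulting `.tflite` model only accepts that one input resolution, so inputs must be resized or padded to it before inference.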

And I also tried the command-line way:

    tflite_convert --saved_model_dir=/path/to/automl/efficientdet/saved_model/ --output_file=/path/to/automl/efficientdet/lite_model/

It raises the same ValueError. Could anyone help with this?

dl-maxwang commented 4 years ago

I'm working on it; the input-shape error is solved, but I hit other issues. I use a Python script to do the conversion; here is my code:

    import numpy as np
    import tensorflow as tf  # TF 1.x API

    graph = tf.Graph()
    with graph.as_default():
        # Load the frozen GraphDef and import it into this graph so the
        # tensors can be looked up by name below.
        with tf.gfile.GFile(frozen_graph, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        input_tensor = graph.get_tensor_by_name('image_arrays:0')
        outputs = graph.get_tensor_by_name('detections:0')
        # Sanity-check the graph with a random image before converting.
        print(np.shape(sess.run([outputs], feed_dict={input_tensor: np.random.random(size=(1, 416, 416, 3))})))
        # Explicitly define input shapes below.
        converter = tf.lite.TFLiteConverter.from_frozen_graph(frozen_graph, input_arrays=['image_arrays'],
                                                              output_arrays=['detections'],
                                                              input_shapes={'image_arrays': [1, 416, 416, 3]})
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
        tflite_quant_model = converter.convert()

However, I met this issue:

    Check failed: array.data_type == array.final_data_type Array "image_arrays" has mis-matching actual and final data types (data_type=uint8, final_data_type=float)

I'll try casting image_arrays to float in place and see if that solves it.
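One way to realize that cast idea is to export a signature that takes uint8 pixels and casts them to float32 inside the graph, so the converter no longer sees a uint8 input feeding float ops. A sketch under assumptions: the tiny model and the 64x64 shape are stand-ins for the real detector, and `image_arrays` is reused here only to mirror the frozen graph's input name:

```python
import tensorflow as tf

# Tiny float-input stand-in for the detector graph.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3), batch_size=1),
    tf.keras.layers.Conv2D(2, 3, padding="same"),
])

# Accept uint8 pixels at the boundary; cast in-graph before the first conv.
@tf.function(input_signature=[
    tf.TensorSpec([1, 64, 64, 3], tf.uint8, name="image_arrays")])
def serve(image_arrays):
    x = tf.cast(image_arrays, tf.float32)  # in-graph uint8 -> float cast
    return model(x)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serve.get_concrete_function()])
tflite_model = converter.convert()
```

With the cast inside the exported function, the converted model's input tensor is uint8 end to end, which also avoids a separate preprocessing cast on the device.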

dl-maxwang commented 4 years ago


For the input, I found that `input_tensor = graph.get_tensor_by_name('convert_image:0')` could be used as the input tensor; however, I met another issue:

    Here is a list of operators for which you will need custom implementations: NonMaxSuppressionV5.

And I am working on how to use custom implementations in the Lite model.
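An alternative to writing a custom kernel is to let the converter fall back to the TensorFlow "Flex" (select) ops for anything without a TFLite builtin, NonMaxSuppressionV5 among them; the app then has to link the Flex delegate at runtime. A minimal sketch, where the small `postprocess` function is a hypothetical stand-in for the detector's NMS step:

```python
import tensorflow as tf

# Stand-in postprocessing step containing non-max suppression, which
# lowers to a NonMaxSuppression* op in the exported graph.
@tf.function(input_signature=[
    tf.TensorSpec([10, 4], tf.float32),
    tf.TensorSpec([10], tf.float32),
])
def postprocess(boxes, scores):
    keep = tf.image.non_max_suppression(boxes, scores, max_output_size=5)
    return tf.gather(boxes, keep)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [postprocess.get_concrete_function()])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer builtin TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels (Flex)
]
tflite_model = converter.convert()
```

The cost of this route is a larger binary, since the Flex delegate pulls in the TensorFlow kernels it needs.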

mingxingtan commented 4 years ago

TFLite should work now (see README instructions)

tiru1930 commented 4 years ago

I am still seeing this issue with tf-nightly==2.3.0.dev20200618:

    Check failed: array.data_type == array.final_data_type Array "image_arrays" has mis-matching actual and final data types (data_type=uint8, final_data_type=float).
    td_tfllite_converter_api_1  | Fatal Python error: Aborted

My conversion code:

    logger.info(f"input_name {input_name}")
    logger.info(f"output_arrays {output_arrays}")
    input_shapes = {}
    for i, name in enumerate(input_name):
        input_shapes[name] = job.conversionParameters.modelInputShape[i]
    logger.info(f"input_shapes {input_shapes}")

    converter = tf.lite.TFLiteConverter.from_saved_model(
        iopaths.targetInputModel,
        input_arrays=input_name,
        input_shapes=input_shapes,
        output_arrays=output_arrays,
    )
    converter.experimental_new_converter = False
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    tflite_model = converter.convert()
    tf.io.gfile.GFile(iopaths.targetOutPutModel, "wb").write(tflite_model)
    logger.info(
        f"tf model conversion is completed and saved at {iopaths.targetOutPutModel}"
    )

And the converter log:

    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:60 - inputs_mapping keys ['image_arrays:0']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:63 - input names ['image_arrays']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:66 - output mapping keys ['detections:0']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:71 - output array ['detections']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:77 - input_name ['image_arrays']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:78 - output_arrays ['detections']
    td_tfllite_converter_api_1  | 2020-07-22 14:47:28.586 | INFO     | converter.tflite_converter:convert_saved_model:82 - input_shapes {'image_arrays': [1, 512, 512, 3]}
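Note that the snippet above forces `experimental_new_converter = False`, which routes through the old TOCO path where this check lives. A sketch of the TF2 route the README points at, handing the SavedModel directly to the default MLIR converter; the tiny model saved here is a stand-in for the exported EfficientDet, and the paths are illustrative:

```python
import tempfile
import tensorflow as tf

# Stand-in SavedModel (the real one is the exported EfficientDet).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3), batch_size=1),
    tf.keras.layers.Conv2D(2, 3, padding="same"),
])
saved_model_dir = tempfile.mkdtemp()
tf.saved_model.save(model, saved_model_dir)

# Hand the SavedModel straight to the MLIR converter; no
# input_arrays/input_shapes arguments are needed on this path.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True  # the default in TF >= 2.2
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
```

If the exported signature still has dynamic spatial dimensions, this would need to be combined with fixing the shape (e.g. via a concrete function) before conversion.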