matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

Export to tflite #2020

Open QwertyKamil opened 4 years ago

QwertyKamil commented 4 years ago

Hi,

I want to use Mask R-CNN with tflite and I have a problem converting the saved h5 model to a tflite model. When I try this

import tensorflow as tf
import mrcnn.model as modellib  # 'model' below is an already-built MaskRCNN instance

name = "model.h5"
model.keras_model.save(name, True, False)
converter = tf.lite.TFLiteConverter.from_keras_model_file(model_file=name, custom_objects={'BatchNorm': modellib.BatchNorm, 'ProposalLayer': modellib.ProposalLayer})

I get this error

TypeError: __init__() missing 2 required positional arguments: 'proposal_count' and 'nms_threshold'

What am I doing wrong?

iidashu commented 4 years ago

I am trying to convert a full model to tflite and just came across the same error when loading the model.

model.keras_model.save(model_file_path, include_optimizer=False)

tf.keras.models.load_model(filepath=model_file_path,
                           custom_objects={'tf': tf,'BatchNorm':modellib.BatchNorm,'ProposalLayer':  modellib.ProposalLayer})

then

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-38-e526d8365af0> in <module>()
     16 tf.keras.models.load_model(filepath=model_file_path,
     17                            custom_objects={'tf': tf,'BatchNorm':modellib.BatchNorm,'ProposalLayer':
---> 18     modellib.ProposalLayer})

9 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in from_config(cls, config)
   1101             A layer instance.
   1102         """
-> 1103         return cls(**config)
   1104 
   1105     def count_params(self):

TypeError: __init__() missing 2 required positional arguments: 'proposal_count' and 'nms_threshold'
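
For context on why both attempts fail: Layer.from_config calls cls(**config), as the traceback shows, but ProposalLayer in model.py takes proposal_count and nms_threshold as positional arguments and, in the version used here, does not write them into its serialized config. A rough, untested sketch of the kind of get_config override that would supply them is below; the attribute names are taken from ProposalLayer.__init__, and note that the layer's mrcnn config object is still not serialized, which is why most replies below freeze the graph instead.

import mrcnn.model as modellib

class SerializableProposalLayer(modellib.ProposalLayer):
    # Records the constructor arguments so Keras can rebuild the layer via cls(**config).
    def get_config(self):
        config = super(SerializableProposalLayer, self).get_config()
        config.update({
            "proposal_count": self.proposal_count,  # set in ProposalLayer.__init__
            "nms_threshold": self.nms_threshold,    # set in ProposalLayer.__init__
        })
        return config

# The model would have to be built and saved with this subclass in place, and then
# loaded with custom_objects={"ProposalLayer": SerializableProposalLayer, ...}.
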
suchiz commented 4 years ago

Has any of you succeeded?

iidashu commented 4 years ago

@suchiz I could export the model to SavedModel format (not tflite) with https://github.com/bendangnuksung/mrcnn_serving_ready

Now I am checking whether the SavedModel can be converted to tflite format (after verifying that my SavedModel is not broken).

suchiz commented 4 years ago

@iidashu Yeah, I used this to convert h5 to pb as well, but now I'm stuck at converting it to tflite too :). Thanks for the concern!

suchiz commented 4 years ago

Hey there, I finally succeeded in converting it; I am now working on the Android adaptation... 1024x1024x3 images are too big to allocate on mobile? Weird error :(.

tflite_convert --output_file=mask-rcnn-model.tflite --output_format=TFLITE --graph_def_file=saved_model\saved_model.pb --input_arrays=input_image --output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape --input_shapes=1,1024,1024,3 --enable_select_tf_ops --allow_custom_ops

with tensorflow 1.13.1

ZhongOO commented 4 years ago

Hey there, I finally succeeded in converting it; I am now working on the Android adaptation... 1024x1024x3 images are too big to allocate on mobile? Weird error :(.

tflite_convert --output_file=mask-rcnn-model.tflite --output_format=TFLITE --graph_def_file=saved_model\saved_model.pb --input_arrays=input_image --output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape --input_shapes=1,1024,1024,3 --enable_select_tf_ops --allow_custom_ops

with tensorflow 1.13.1

Hi, can you please share how you converted it? I am stuck at converting it to tflite too. Thank you!

bmabir17 commented 4 years ago

This is how I was able to convert it: https://gist.github.com/bmabir17/754a6e0450ec4fd5e25e462af949cde6
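
The usual first step in that kind of conversion is freezing the Keras graph to a .pb before handing it to the TFLite converter. Below is a rough TF 1.x sketch of that step, assumed rather than copied from the gist, with illustrative paths.

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

# Assumes a MaskRCNN model was already built in "inference" mode in the current
# Keras session and its weights loaded, so K.get_session() holds the full graph.
sess = K.get_session()
output_names = ["mrcnn_class/Softmax", "mrcnn_bbox/Reshape"]  # the heads used later as output_arrays
frozen_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen_graph_def, "./frozen_model", "mask_rcnn_frozen.pb", as_text=False)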

Did any of you try to run inference on it using the Python interpreter or Android?

ZhongOO commented 4 years ago

@suchiz can you please share the way to use it in android?

bmabir17 commented 4 years ago

@ZhongOO I am still working on it. Will share an update as soon as I am finished.

ZhongOO commented 4 years ago

I tried to use the tflite model on Android, but failed. It seems some ops in TensorFlow are not supported in TensorFlow Lite.


ZhongOO commented 4 years ago

@bmabir17 I used the same approach you described above to convert the model to tflite (model -> saved model -> tflite), but I am a little confused about output_arrays when converting the .pb to tflite. And when I tried the Mask R-CNN tflite model in PyCharm, I got an error in interpreter.invoke().

bmabir17 commented 4 years ago

@bmabir17 I used the same approach you described above to convert the model to tflite (model -> saved model -> tflite), but I am a little confused about output_arrays when converting the .pb to tflite.

output_arrays is actually the names of the output layers in the Mask R-CNN network. You can use print(model.keras_model.summary()) to print your model structure, and there you will find the names of the last two output layers.
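
A small sketch of that inspection, assuming a MaskRCNN model built in "inference" mode as in this repo:

# 'model' is the MaskRCNN instance, e.g. modellib.MaskRCNN(mode="inference", ...)
model.keras_model.summary()  # prints every layer; --output_arrays takes graph/layer names

# Or look only at the declared output tensors of the Keras graph:
for tensor in model.keras_model.outputs:
    print(tensor.name)  # e.g. "mrcnn_class/Softmax:0", "mrcnn_bbox/Reshape:0"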

And when I tried the Mask R-CNN tflite model in PyCharm, I got an error in interpreter.invoke().

Yes, it's a known issue: the tflite Python interpreter does not seem to have SELECT_TF_OPS support yet (under development), although the TOCO converter does have that option.
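
For reference, the call that hits this limitation is the standard interpreter flow; the file name and shape below are illustrative, and only the image input is shown:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mask-rcnn-model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy_image = np.zeros((1, 1024, 1024, 3), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy_image)
interpreter.invoke()  # this is where the unsupported-op / SELECT_TF_OPS problem surfaces
print(interpreter.get_tensor(output_details[0]['index']).shape)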

I tried to use the tflite model on Android, but failed. It seems some ops in TensorFlow are not supported in TensorFlow Lite.

The Android (Java) interpreter also seems to have this issue, but they have released a separate tflite package to address it. I just found that out yesterday; I will test it and see whether it works.

ZhongOO commented 4 years ago

@bmabir17 Thank you! I just remembered that when I used the tflite model I got an error saying the op CropAndResize is not supported in TensorFlow Lite. Did you come across the same issue?

bmabir17 commented 4 years ago

@ZhongOO Yes, tflite does not support the following TF ops, which are used in Mask R-CNN: ResizeNearestNeighbor, Stack, and TensorFlowShape.
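
One way to see which ops a frozen graph actually contains is to scan its GraphDef; a TF 1.x sketch with an illustrative path:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model/mask_rcnn_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

ops_used = sorted({node.op for node in graph_def.node})
print(ops_used)  # look for CropAndResize, ResizeNearestNeighbor, Stack, etc.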

ZhongOO commented 4 years ago

@bmabir17 did you solve the issue?

suchiz commented 4 years ago

@ZhongOO So sorry for the late answer. My country has been locked down because of the coronavirus, and I use GitHub for work purposes only, so I usually don't check my emails or log on to GitHub.

I used Netron to check the output of the graph, then converted it with the line I gave you: tflite_convert --output_file=mask-rcnn-model.tflite --output_format=TFLITE --graph_def_file=saved_model\saved_model.pb --input_arrays=input_image --output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape --input_shapes=1,1024,1024,3 --enable_select_tf_ops --allow_custom_ops

Then, to use it on Android, I based my code on the object detection example from tflite directly. They use a real-time CNN with the bounding rectangles and everything, so I didn't look at how to use the mask output; I just wanted to test Mask R-CNN using the bounding box output only. But at runtime I got an exception from Android: cannot allocate bitmap 1024x1024x3.

That is where I'm stuck, so I haven't really run it on Android yet... Sorry :/

ZhongOO commented 4 years ago

@suchiz Yeah, a 1024x1024x3 bitmap is too big to allocate on Android. Have you tried setting the input_shapes smaller, for example 1,224,224,3?

suchiz commented 4 years ago

@ZhongOO No, because it would mean retraining my model from scratch, and I don't want that; I don't have time for it... I spent too much time fine-tuning this one. So I changed my objective: instead of a 100% independent mobile app, I use a server for the computation and the mobile app to send the data...

bmabir17 commented 4 years ago

@suchiz I may have a solution for that. During conversion, if you set the input_shapes like the following, it works (no retraining required). I inspected my model using Netron and it shows the input shape has indeed changed. I converted a model like this with no error.

    # TF 1.x (contrib) converter; PATH_TO_SAVE_FROZEN_PB and FROZEN_NAME point to the frozen graph
    input_arrays = ["input_image"]
    output_arrays = ["mrcnn_class/Softmax", "mrcnn_bbox/Reshape"]
    converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
        PATH_TO_SAVE_FROZEN_PB + "/" + FROZEN_NAME,
        input_arrays, output_arrays,
        # overriding the shape here resizes the graph input without retraining
        input_shapes={"input_image": [1, 256, 256, 3]}
        )
    # tf.enable_control_flow_v2()

    converter.experimental_new_converter = True
    # fall back to full TensorFlow ops for anything the TFLite builtins don't cover
    converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()

But I am unable to run inference on it with the Python interpreter (for all my converted models). It throws a segmentation fault, and I have found no explanation for why it happens.

suchiz commented 4 years ago

@bmabir17 Yeah, but this would generate a model with the 256x256x3 input I put in, and my weights are for 1024x1024x3 only.

bmabir17 commented 4 years ago

@suchiz Hmm, I tried to run the tflite model but encountered a different error. It seems like an internal library error, so I have created an issue on the tensorflow repo.

prakhar471 commented 4 years ago

@bmabir17 I have tried your code but am facing this error: "Invalid tensors 'input_image' were found". Do you have any idea how this can be solved?

bmabir17 commented 4 years ago

@bmabir17 I have tried your code but am facing this error: "Invalid tensors 'input_image' were found". Do you have any idea how this can be solved?

Are you talking about the conversion code? Can you specify which line it's thrown from? The error message seems weird; why would there be an error if 'input_image' was found?

prakhar471 commented 4 years ago

I used your whole code; here is the traceback:

in keras_to_tflite(in_weight_file, out_weight_file)
     74         PATH_TO_SAVE_FROZEN_PB+"/"+FROZEN_NAME,
     75         input_arrays, output_arrays,
---> 76         input_shapes={"input_image":[1,256,256,3]}
     77         )
     78     converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,tf.lite.OpsSet.SELECT_TF_OPS]

/tensorflow-1.15.2/python3.6/tensorflow_core/python/util/deprecation.py in new_func(*args, **kwargs)
    322               'in a future version' if date is None else ('after %s' % date),
    323               instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

/tensorflow-1.15.2/python3.6/tensorflow_core/lite/python/lite.py in from_frozen_graph(cls, graph_def_file, input_arrays, output_arrays, input_shapes)
   1059     """Creates a TocoConverter class from a file containing a frozen graph."""
   1060     return TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays,
-> 1061                                              output_arrays, input_shapes)
   1062
   1063   @classmethod

/tensorflow-1.15.2/python3.6/tensorflow_core/lite/python/lite.py in from_frozen_graph(cls, graph_def_file, input_arrays, output_arrays, input_shapes)
    703       # Get input and output tensors.
    704       input_tensors = _get_tensors_from_tensor_names(
--> 705           sess.graph, input_arrays)
    706       output_tensors = _get_tensors_from_tensor_names(
    707           sess.graph, output_arrays)

/tensorflow-1.15.2/python3.6/tensorflow_core/lite/python/util.py in get_tensors_from_tensor_names(graph, tensor_names)
    120   if invalid_tensors:
    121     raise ValueError("Invalid tensors '{}' were found.".format(
--> 122         ",".join(invalid_tensors)))
    123   return tensors
    124

ValueError: Invalid tensors 'input_image' were found.

I am also getting this error: Exception: Placeholder input_anchors should be specied by input_arrays.

TimSmole commented 4 years ago

@prakhar471 Use this command to convert saved_model.pb into mask-rcnn-model.tflite:

tflite_convert \
--graph_def_file=./saved_model.pb \
--output_file=./mask-rcnn-model.tflite \
--output_format=TFLITE  \
--input_arrays=input_image,input_anchors,input_image_meta \
--input_shapes=1,1024,1024,3:1,261888,4:1,22 \
--output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape \
--enable_select_tf_ops \
--allow_custom_ops

But keep in mind that you might need to change the input_shapes parameter so that it matches the shape of your images (config.IMAGE_SHAPE) and anchors shape (which you can get with model.get_anchors(config.IMAGE_SHAPE).shape).
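
A short sketch of how those numbers can be read off a built model, assuming the usual inference setup from this repo; the class count below is illustrative:

import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "export"
    NUM_CLASSES = 1 + 9   # illustrative; 10 classes gives IMAGE_META_SIZE == 22
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", config=config, model_dir="./logs")

print(config.IMAGE_SHAPE)                           # e.g. [1024 1024 3]  -> "1,1024,1024,3"
print(model.get_anchors(config.IMAGE_SHAPE).shape)  # e.g. (261888, 4)    -> "1,261888,4"
print(config.IMAGE_META_SIZE)                       # 1 + 3 + 3 + 4 + 1 + NUM_CLASSES, e.g. 22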

bmabir17 commented 4 years ago

@prakhar471 Use this command to convert saved_model.pb into mask-rcnn-model.tflite:

tflite_convert \
--graph_def_file=./saved_model.pb \
--output_file=./mask-rcnn-model.tflite \
--output_format=TFLITE  \
--input_arrays=input_image,input_anchors,input_image_meta \
--input_shapes=1,1024,1024,3:1,261888,4:1,22 \
--output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape \
--enable_select_tf_ops \
--allow_custom_ops

@TimSmole but this command gives the following error

usage: tflite_convert [-h] --output_file OUTPUT_FILE
[--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE]
[--enable_v1_converter] [--experimental_new_converter]
tflite_convert: error: one of the arguments --saved_model_dir --keras_model_file is required

did you use tensorflow 2.2?

TimSmole commented 4 years ago

@bmabir17 Yes, sorry, I should have made that clearer. I used this pull request with tensorflow 2.2.

I should also add that after exporting it to .tflite I haven't had any success invoking it. I hope someone else will come up with a solution.

bmabir17 commented 4 years ago

@TimSmole Thank you for the clarification.

--input_shapes=1,1024,1024,3:1,261888,4:1,22 \

Can you please tell me how you came up with this? I thought the input shape was supposed to be 1,1024,1024,3

TimSmole commented 4 years ago

@bmabir17 If you inspect the model.py file, you will see that in inference mode the model has three inputs (link) - [input_image, input_image_meta, input_anchors] (hence the --input_arrays parameter to the script). The tflite_convert script expects you to provide shapes for all inputs, separated by colons (see tflite_convert --help).
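
For reference, the relevant input definitions look approximately like this, paraphrased from MaskRCNN.build() in model.py with mode="inference":

import keras.layers as KL  # aliased the same way inside mrcnn/model.py

def build_inference_inputs(config):
    # Paraphrased sketch of the three inference-mode inputs:
    input_image = KL.Input(shape=[None, None, config.IMAGE_SHAPE[2]], name="input_image")
    input_image_meta = KL.Input(shape=[config.IMAGE_META_SIZE], name="input_image_meta")
    input_anchors = KL.Input(shape=[None, 4], name="input_anchors")
    return [input_image, input_image_meta, input_anchors]

# These three layer names are what --input_arrays expects, and --input_shapes pins
# their static shapes, colon-separated in the same order as --input_arrays.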

bmabir17 commented 4 years ago

@TimSmole Thank you for the info :smile:

Jainam0 commented 4 years ago

@prakhar471 Use this command to convert saved_model.pb into mask-rcnn-model.tflite:

tflite_convert \
--graph_def_file=./saved_model.pb \
--output_file=./mask-rcnn-model.tflite \
--output_format=TFLITE  \
--input_arrays=input_image,input_anchors,input_image_meta \
--input_shapes=1,1024,1024,3:1,261888,4:1,22 \
--output_arrays=mrcnn_class/Softmax,mrcnn_bbox/Reshape \
--enable_select_tf_ops \
--allow_custom_ops

@TimSmole but this command gives the following error

usage: tflite_convert [-h] --output_file OUTPUT_FILE
                      [--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE]
                      [--enable_v1_converter] [--experimental_new_converter]
tflite_convert: error: one of the arguments --saved_model_dir --keras_model_file is required

did you use tensorflow 2.2?

usage: tflite_convert [-h] --output_file OUTPUT_FILE
                      [--saved_model_dir SAVED_MODEL_DIR | --keras_model_file KERAS_MODEL_FILE]
                      [--enable_v1_converter] [--experimental_new_converter]
tflite_convert: error: one of the arguments --saved_model_dir --keras_model_file is required

With tensorflow 2.2. Any solution? I have referred to this link to train the model: https://github.com/TannerGilbert/Tensorflow-Object-Detection-API-train-custom-Mask-R-CNN-model/blob/master/Tensorflow_Object_Detection_API_Instance_Segmentation_in_Google_Colab.ipynb

Tubhalooter commented 2 years ago

Did anyone manage to get it to convert to tflite? I've tried about 4 different ways, all of which have run into errors. I'm getting this error using @TimSmole's method:

2022-02-06 20:43:04.545474: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Traceback (most recent call last):
  File "/usr/local/bin/tflite_convert", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 697, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 680, in run_main
    _convert_tf2_model(tflite_flags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 284, in _convert_tf2_model
    tags=_parse_set(flags.saved_model_tag_set))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/lite.py", line 1605, in from_saved_model
    saved_model = _load(saved_model_dir, tags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load.py", line 900, in load
    result = load_internal(export_dir, tags, options)["root"]
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load.py", line 958, in load_internal
    root = load_v1_in_v2.load(export_dir, tags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 286, in load
    result = loader.load(tags=tags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 211, in load
    meta_graph_def = self.get_meta_graph_def_from_tags(tags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 91, in get_meta_graph_def_from_tags
    tags)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 401, in get_meta_graph_def_from_tags
    f"MetaGraphDef associated with tags {str(tags).strip('[]')} "
RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel, with available tags '[set()]'. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli.

rida-rida commented 2 years ago

@bmabir17 @suchiz Please let me know how to resolve this issue after converting to a tflite model. I am getting this error: "ValueError: Didn't find custom op for name 'CropAndResize' with version 1 Registration failed."