@AI-P-K, please check this content on model optimization (link) for mobile and IoT devices.
Do you mean you are using this model from TF Hub, or are you training the model from scratch?
If you're training the model from scratch, then you have more opportunity to modify the model architecture itself to meet your performance needs. In this case, the TensorFlow Forum is a good place to ask for advice from others.
If, however, you are using an existing SavedModel, then your options are more limited. Converting it to a TFLite format is a great option if you want to deploy it to a device for inference. This can be done using the TensorFlow Lite converter. For more information on using SavedModels, including with TensorFlow Serving, refer to the TF guide.
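As a rough illustration, converting a SavedModel with the TensorFlow Lite converter looks something like the sketch below. The paths are placeholders, and the conversion only succeeds if all of the model's ops are supported by TFLite (some detection models require additional work), so treat this as a starting point rather than a guaranteed recipe:

```python
import tensorflow as tf

# Placeholder path to the exported SavedModel directory.
saved_model_dir = "exported_model/saved_model"

# Build a TFLite converter directly from the SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

# Optional: enable default optimizations (e.g. dynamic-range quantization).
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Convert and write the flatbuffer to disk.
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```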
Unfortunately I am not familiar with using SavedModels with OpenCV, although I did find this guide. I expect the maintainers of the OpenCV project will be able to best advise you in this area.
Do you have any questions specifically related to TF Hub?
Closing this. Please reopen if needed.
Hi,
I have trained mask_rcnn_inception_resnet_v2_1024x1024_coco17 with the TF2 Object Detection API. I then exported the model to a saved_model.pb and performed inference with inference_from_saved_model_tf2_colab.ipynb, which worked like a charm. The problem comes when I want to optimize this model for faster inference, either by freezing it and using it with the OpenCV dnn module or by any other method such as TFLite, TensorRT, or OpenVINO. Is there any support you can offer for this problem? How are we supposed to make our trained models run faster at inference? Is it even possible? I have tried every method out there, and even when I do manage to freeze the model, I still can't use it with the OpenCV dnn module. The freezing approach I tried looks roughly like the sketch below.
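This is only a rough sketch of the TF2 freezing step; the paths are placeholders, and whether OpenCV dnn can actually consume the resulting frozen graph from a TF2 OD API Mask R-CNN is exactly the part I can't get to work:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Placeholder path to the exported SavedModel directory.
saved_model_dir = "exported_model/saved_model"

# Load the SavedModel and grab its default serving signature.
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures["serving_default"]

# Fold the variables into constants to obtain a single frozen GraphDef.
frozen_func = convert_variables_to_constants_v2(concrete_func)
graph_def = frozen_func.graph.as_graph_def()

# Write the frozen graph to a .pb file for downstream tools (e.g. OpenCV dnn).
tf.io.write_graph(graph_def, logdir=".", name="frozen_graph.pb", as_text=False)
```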