fizyr / keras-retinanet

Keras implementation of RetinaNet object detection.
Apache License 2.0

OpenVINO: Converting inference model #725

Closed TimoK93 closed 5 years ago

TimoK93 commented 5 years ago

Hi guys,

By chance I fixed the following issue. Let me report it to you so it can save you some time!

These days I tried to run your model on an Intel GPU based SoC, so I needed to convert the model via OpenVINO to an Intel-supported inference model.

While converting I got an error: `[ERROR] Graph contains a cycle. Can not proceed`. For more information, this issue seems to be similar: https://software.intel.com/en-us/forums/computer-vision/topic/781822

By chance I found this solution: set the following argument to use the Faster RCNN custom operations config:

python3 mo_tf.py --input_model <MODEL_PATH> --tensorflow_use_custom_operations_config <OPENVINO_DIR>/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json

Then OpenVINO converts your model instantly!
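
For anyone trying the same thing: mo_tf.py expects a frozen TensorFlow graph rather than the Keras .h5 snapshot, so the model has to be exported to an inference model and frozen first. Something along these lines should work (untested sketch; load_model and convert_model are this repo's helpers, while the file names and the TF 1.x freezing pattern are placeholders to adapt):

```python
# Rough sketch: export a keras-retinanet training snapshot to an inference model
# and freeze it into a .pb that mo_tf.py can read. Assumes TF 1.x with the
# standalone Keras backend; file names are placeholders.
from keras import backend as K
from tensorflow.python.framework import graph_io, graph_util

from keras_retinanet.models import convert_model, load_model

K.set_learning_phase(0)  # inference mode

model = load_model('snapshot.h5', backbone_name='resnet50')
model = convert_model(model)  # appends bbox regression, clipping and NMS

sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [out.op.name for out in model.outputs])
graph_io.write_graph(frozen, '.', 'retinanet_frozen.pb', as_text=False)
```

The resulting retinanet_frozen.pb is then what would be passed to --input_model above.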

Greetings, Timo

vcarpani commented 5 years ago

Hi, we never tried deploying on an Intel GPU based SoC, so we never ran into this kind of issue. In my opinion you got that error because a custom layer may contain something that the OpenVINO converter sees as a cycle, and that situation has been patched for Faster RCNN. Anyway, we are interested in analyzing how our models perform on different platforms, so it would be nice if you could share your hardware platform and your results ;)

TimoK93 commented 5 years ago

The platform is an UP Squared SoC.

I just realized that the converted inference engine has a size of only a few bytes... So my "fix" didn't really fix anything, it just cheated the converter. I think I have to spend some more time on this issue...

Do you know which layers of RetinaNet are custom layers? Maybe I could factor them out.

hgaiser commented 5 years ago

There aren't that many custom layers, you can find them here.
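
For reference, the layers a converter is most likely to choke on live in keras_retinanet.layers. keras_retinanet.models.load_model registers them for you; if you load an exported model with plain Keras instead, they have to be passed as custom_objects, roughly like this (untested sketch; the file name is a placeholder, and training snapshots additionally need the custom losses):

```python
# Rough sketch: register keras-retinanet's custom layers when loading an
# inference model with plain Keras. Class names are taken from keras_retinanet.layers.
import keras

from keras_retinanet import layers

model = keras.models.load_model(
    'retinanet_inference.h5',  # placeholder path to a converted inference model
    custom_objects={
        'UpsampleLike': layers.UpsampleLike,
        'Anchors': layers.Anchors,
        'RegressBoxes': layers.RegressBoxes,
        'ClipBoxes': layers.ClipBoxes,
        'FilterDetections': layers.FilterDetections,
    },
)
```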

TimoK93 commented 5 years ago

Just an update:

I'm in contact with intel regarding this issue. Here is the topic: https://software.intel.com/en-us/comment/1928740

Maxfashko commented 5 years ago

@TimoK93 Did you manage to convert the model and run inference using OpenVINO?

TimoK93 commented 5 years ago

I stopped working on it. Using OpenVINO with custom layers is very difficult... It is optimized for the Google object detection API. The Google models are really easy to implement!

Maxfashko commented 5 years ago

@TimoK93 Can you share a link to the RetinaNet model for the Google object detection API? Is it the SSD detector with ResNet 101 FPN? https://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_resnet_v1_fpn_feature_extractor_test.py

TimoK93 commented 5 years ago

Sorry, there isn't a RetinaNet model for OpenVINO... Use one of these models instead: https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow

hgaiser commented 5 years ago

Thanks for the update @TimoK93, but in that case I'll close this issue since there's not much we can do about it.

TimoK93 commented 5 years ago

Hey guys,

in the latest OpenVINO release:

Added support of the following TensorFlow* topologies: VDCNN, Unet, A3C, DeepSpeech, lm_1b, lpr-net, CRNN, NCF, RetinaNet, DenseNet, ResNext.
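
Once the model converts with that release, loading the resulting IR from Python should look roughly like this. Untested sketch: it assumes an IECore-based OpenVINO release (attribute names changed between versions, e.g. net.inputs vs. net.input_info), and the file names, device and input handling are placeholders:

```python
# Rough sketch: run the converted IR (.xml/.bin) with the OpenVINO Python API.
# Assumes an IECore-based release (2019 R2+ for read_network); adapt names to
# your version of the inference engine.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='retinanet.xml', weights='retinanet.bin')
exec_net = ie.load_network(network=net, device_name='GPU')  # Intel GPU, or 'CPU'

input_blob = next(iter(net.inputs))        # net.input_info on newer releases
n, c, h, w = net.inputs[input_blob].shape  # IR inputs are NCHW

dummy = np.zeros((n, c, h, w), dtype=np.float32)  # replace with a preprocessed image
results = exec_net.infer(inputs={input_blob: dummy})
print({name: out.shape for name, out in results.items()})
```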

Maybe this will help someone! Kind Regards, Timo

Maxfashko commented 5 years ago

@TimoK93 That is old information; it was a long time ago. Recent attempts to get the RetinaNet model working are in this post: https://software.intel.com/en-us/forums/computer-vision/topic/806219