openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

How to use mask_rcnn_demo in OpenVino? #188

Closed ausk closed 5 years ago

ausk commented 5 years ago

Hi, thank you for providing such an awesome inference library.

I downloaded and uncompressed mask_rcnn_inception_v2_coco_2018_01_28, then copied mask_rcnn_support*.json into the directory. When I run the command:

python "$mo/mo_tf.py" \
 --input_model "frozen_inference_graph.pb" \
 --tensorflow_object_detection_api_pipeline_config "pipeline.config" \
 --tensorflow_use_custom_operations_config "mask_rcnn_support_api_v1.11.json" 
 -b 1 \
 --data_type=FP32 \
 --reverse_input_channels \

It complains:

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): The matched sub-graph contains network input node "image_tensor".

I have tried TF 1.4, 1.11, 1.12, and 1.13, together with mask_rcnn_support.json, mask_rcnn_support_api_v1.7.json, and mask_rcnn_support_api_v1.11.json. All fail with the same message.

The full message is:


Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/xxx/intel/m/mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
        - Path for generated IR:        /home/xxx/intel/m/mask_rcnn_inception_v2_coco_2018_01_28/.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  /home/xxx/intel/m/mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  /home/xxx/intel/m/mask_rcnn_inception_v2_coco_2018_01_28/mask_rcnn_support_api_v1.11.json
Model Optimizer version:        2019.1.1-83-g28dfbfd
[ WARNING ]
Detected not satisfied dependencies:
        test-generator: installed: 0.1.2, required: 0.1.1

Please install required versions of components or use install_prerequisites script
/home/xxx/intel/openvino_2019.1.144/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (800, 800).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): The matched sub-graph contains network input node "image_tensor".

Can anyone tell me what I should do to run the mask_rcnn_demo?

ausk commented 5 years ago

And my OpenVINO environment is OK: I have set up the environment, and classification_sample, classification_sample_async, and security_barrier_camera_demo all run successfully. For example:

D:\openvino\build\intel64\Release\security_barrier_camera_demo.exe -i "D:\openvino\car_1.bmp" ^
 -m "D:\openvino\license32\vehicle-license-plate-detection-barrier-0106.xml" ^
 -m_va "D:\openvino\license32\vehicle-attributes-recognition-barrier-0039.xml" ^
 -m_lpr "D:\openvino\license32\license-plate-recognition-barrier-0001.xml" ^
 -d GPU -d_va GPU -d_lpr GPU

[ INFO ] InferenceEngine:
        API version ............ 1.6
        Build .................. 23780
[ INFO ] Parsing input parameters
[ INFO ] Capturing video streams from the video files or loading images
[ INFO ] Files were added: 1
[ INFO ]     D:\openvino\car_1.bmp
[ INFO ] Number of input image files: 1
[ INFO ] Number of input video files: 0
[ INFO ] Number of input channels: 1
[ INFO ] Display resolution: 1920x1080
[ INFO ] Loading plugin GPU

        API version ............ 1.6
        Build .................. 23780
        Description ....... clDNNPlugin
[ INFO ] Loading network files for VehicleDetection
[ INFO ] Batch size is forced to  1
[ INFO ] Checking Vehicle Detection inputs
[ INFO ] Checking Vehicle Detection outputs
[ INFO ] Loading Vehicle Detection model to the GPU plugin
[ INFO ] Loading network files for VehicleAttribs
[ INFO ] Batch size is forced to 1 for Vehicle Attribs
[ INFO ] Checking VehicleAttribs inputs
[ INFO ] Checking Vehicle Attribs outputs
[ INFO ] Loading Vehicle Attribs model to the GPU plugin
[ INFO ] Loading network files for Licence Plate Recognition (LPR)
[ INFO ] Batch size is forced to  1 for LPR Network
[ INFO ] Checking LPR Network inputs
[ INFO ] Checking LPR Network outputs
[ INFO ] Loading LPR model to the GPU plugin
[ INFO ] Start inference
To close the application, press 'CTRL+C' or any key with focus on the output window

Average inference time: 138.138 ms (7.23914 fps)

Average vehicle detection time: 93.7365 ms (10.6682 fps)

Average vehicle attribs time: 7.09678 ms (140.909 fps)

Average lpr time: 7.53517 ms (132.711 fps)

Total execution time: 25613.7

[ INFO ] Execution successful
shubha-ramani commented 5 years ago

Dear @ausk, please unzip the attached *.zip file; inside you will find a mask_rcnn_support_api_v1.13.json.

tf_obj_det_jsons.zip

Let me know if it works for you. And sorry for all your troubles.

Sincerely,

Shubha

ausk commented 5 years ago

Dear @shubha-ramani, thank you for your timely reply.


Environment: OpenVINO 2019 R1.1, Ubuntu 16.04, Python 3.6, TensorFlow (1.11, 1.12, 1.13), model: mask_rcnn_inception_v2_coco_2018_01_28.tar.gz


Sorry to tell you that if I use mask_rcnn_support_api_v1.13.json from tf_obj_det_jsons.zip instead of the v1.11 json, it still doesn't work.

The command:

 python "$mo/mo_tf.py"   \
  --data_type=FP32  \
  --reverse_input_channels \
  --input_model "frozen_inference_graph.pb" \
  --tensorflow_object_detection_api_pipeline_config "pipeline.config"  
  --tensorflow_use_custom_operations_config "mask_rcnn_support_api_v1.13.json"

The old message (v1.11):

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement"
 (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): 
The matched sub-graph contains network input node "image_tensor".

The new message (v1.13):

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): 
Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

Sincerely,

Jin

shubha-ramani commented 5 years ago

Dearest @ausk, OK. I promise to reproduce this today and give you a workaround soon. I believe you. The issue is that the TensorFlow Object Detection API team has been changing their models lately, and now the *.json files in OpenVINO don't match up. You are not the first customer to report this. Please stay tuned and I will report back here.

Thanks for your patience !

Shubha

shubha-ramani commented 5 years ago

Dearest @ausk, I just tried mask_rcnn_inception_v2_coco_2018_01_28 on OpenVINO 2019 R1.1 and it works fine. Below is my command:

python "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.191\deployment_tools\model_optimizer\extensions\front\tf\mask_rcnn_support.json"

I am not sure what you are doing wrong. Perhaps you forgot to install the prerequisites under deployment_tools\model_optimizer\install_prerequisites? Make sure you do that; it's a very important step. Also make sure you are using TensorFlow 1.12, and don't go as high as 1.13. Do a "pip show tensorflow" and see what you get. Model Optimizer does not yet support TF 1.13.

Here's what I have (Windows 10, though the OS makes absolutely zero difference):

C:\Users\sdramani\Downloads>pip show tensorflow
Name: tensorflow
Version: 1.12.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: opensource@google.com
License: Apache 2.0
Location: c:\users\sdramani\appdata\local\programs\python\python36\lib\site-packages
Requires: termcolor, protobuf, numpy, keras-applications, tensorboard, grpcio, keras-preprocessing, six, gast, wheel, absl-py, astor
Required-by:

C:\Users\sdramani\Downloads>
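If pip reports a newer TensorFlow, a minimal way to pin the environment to 1.12 is the following sketch (assuming pip manages the same Python environment that Model Optimizer runs in):

pip uninstall -y tensorflow
pip install "tensorflow==1.12.0"
python -c "import tensorflow as tf; print(tf.__version__)"

The last line is just a sanity check; it should print 1.12.0.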

Also, I grabbed the model from the Model Optimizer TF Supported Models list, and it seems like you did that too: http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz. Please report back here and let me know what you discover.

Thanks,

Shubha

shubha-ramani commented 5 years ago

Also, dear @ausk, in your version of the command you are using quotes. Please avoid them; note that I'm not doing that in my command. You're doing this:

python "$mo/mo_tf.py" \ --input_model "frozen_inference_graph.pb" \ --tensorflow_object_detection_api_pipeline_config "pipeline.config" \ --tensorflow_use_custom_operations_config "mask_rcnn_support_api_v1.11.json"

And I don't use those quotes. Our documentation does not say to quote those parameters, and who knows? That could be causing you a lot of heartache!

Shubha

ausk commented 5 years ago

Dearest @shubha-ramani, thank you for your patience and time.

I recreated the environments on Windows 10 and Ubuntu 16.04 with Python 3.6, TensorFlow 1.12, and the other required libraries. Here I show the results on Windows 10.

Model Optimizer works when converting squeezenet1.1.caffemodel:

D:\openvino\tfdetection>python D:\Programs\Intel\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo.py --input_model D:/openvino/squeezenet1.1.caffemodel --output_dir D:/openvino/ --data_type FP32
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      D:/openvino/squeezenet1.1.caffemodel
        - Path for generated IR:        D:/openvino/
        - IR output name:       squeezenet1.1
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
Caffe specific parameters:
        - Enable resnet optimization:   True
        - Path to the Input prototxt:   D:/openvino/squeezenet1.1.prototxt
        - Path to CustomLayersMapping.xml:      D:\Programs\Intel\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\extensions\front\caffe\CustomLayersMapping.xml
        - Path to a mean file:  Not specified
        - Offsets for a mean file:      Not specified
Model Optimizer version:        2019.1.1-83-g28dfbfd

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: D:/openvino/squeezenet1.1.xml
[ SUCCESS ] BIN file: D:/openvino/squeezenet1.1.bin
[ SUCCESS ] Total execution time: 2.64 seconds.

But it still does not work for Mask R-CNN:

D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28> python D:\Programs\Intel\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py ^
  --data_type=FP32 ^
  --reverse_input_channels ^
  --input_model frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config pipeline.config ^
  --tensorflow_use_custom_operations_config mask_rcnn_support_api_v1.13.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb
        - Path for generated IR:        D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28\mask_rcnn_support_api_v1.13.json
Model Optimizer version:        2019.1.1-83-g28dfbfd
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. 
The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated 
with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (800, 800).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" 
(<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): 
Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. 
Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

Maybe I should give up on Mask R-CNN and try other models.

Thank you sincerely again. Best wishes.

Jin

shubha-ramani commented 5 years ago

Dearest @ausk (Jin), oh please don't give up on us! In this case, however, your error is distinctly different from the original one (and actually it makes more sense). Is your model custom-trained? Or did you just use a pre-trained one from the TensorFlow tar.gz?

Note that you used mask_rcnn_support_api_v1.13.json, but look at what I used: mask_rcnn_support.json. The v1.13 json will work if you custom-trained your model, which I don't think you did (or you would have mentioned it).

Please try again with mask_rcnn_support.json and let me know what happens.
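If it helps, the replacement configs that ship with Model Optimizer all live under extensions\front\tf inside your install; a quick way to see what is available (path shown for a default Windows install of 2019 R1.148, adjust to yours):

dir "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\extensions\front\tf\mask_rcnn_support*.json"

Pick the one matching the Object Detection API version the model was exported with; for the stock 2018_01_28 download that is mask_rcnn_support.json.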

Shubha

ausk commented 5 years ago

Dearest @shubha-ramani (Shubha), thank you very much. I tried mask_rcnn_support.json again, and OMG, it works!

Let's review the issue "How to use mask_rcnn_demo in OpenVINO":

(0) I installed OpenVINO 2019 R1 on Windows 10 and Ubuntu 16.04, installed all the required Python libraries, and downloaded mask_rcnn_inception_v2_coco_2018_01_28.tar.gz.

(1) At first, I tried to convert Mask R-CNN using mask_rcnn_support.json, mask_rcnn_support_api_v1.7.json, and mask_rcnn_support_api_v1.11.json; all failed. But I was not aware that the installed TensorFlow was v1.13. The error was:

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement"
 (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): 
The matched sub-graph contains network input node "image_tensor".

(2) Then, at your suggestion, I retried with mask_rcnn_support_api_v1.13.json from [tf_obj_det_jsons.zip](https://github.com/opencv/dldt/files/3311399/tf_obj_det_jsons.zip). It failed with:

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" 
(<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): 
Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. 
Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

But I was still not aware that the installed TensorFlow was v1.13.

(3) Then I reinstalled OpenVINO, Python, and TensorFlow (1.11, 1.12, 1.13) and converted with mask_rcnn_support_api_v1.13.json. It failed with the same error as (2).


(4) Finally, I used TensorFlow 1.12 with mask_rcnn_support.json from OpenVINO 2019 R1 to convert the pretrained model from mask_rcnn_inception_v2_coco_2018_01_28.tar.gz:

D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28> python D:\Programs\Intel\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py ^
  --data_type=FP32 ^
  --reverse_input_channels ^
  --input_model frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config pipeline.config ^
  --tensorflow_use_custom_operations_config mask_rcnn_support.json

And the result is SUCCESS!

[ SUCCESS ] Generated IR model.
...
[ SUCCESS ] Total execution time: 50.15 seconds.

Test the model:

D:\openvino\tfdetection\mask_rcnn_inception_v2_coco_2018_01_28>D:\openvino\build\intel64\Release\mask_rcnn_demo.exe -i "D:\openvino\car.png" -m "frozen_inference_graph.xml" -d CPU
InferenceEngine:
        API version ............ 1.6
        Build .................. 23780
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     D:\openvino\car.png
[ INFO ] Loading plugin

        API version ............ 1.6
        Build .................. 23780
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image D:\openvino\car.png
[ WARNING ] Image is resized from (787, 259) to (800, 800)
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Setting input data to the blobs
[ INFO ] Start inference (1 iterations)

Average running time of one iteration: 841.158 ms

[ INFO ] Processing output blobs
[ INFO ] Detected class 3 with probability 0.951828 from batch 0: [3.31388, 22.5477], [787, 249.706]
[ INFO ] Image out0.png created!
[ INFO ] Execution successful

My CPU is a 4-core i5-7550 @ 3.4 GHz. Inference on an (800, 800) image takes 841.158 ms (on the first run).

Here is the result:

mask_rcnn_result
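As a side note, the resize from (787, 259) to (800, 800) seen in the demo log comes from the fixed input shape baked in at conversion time. If a different resolution is preferred, the --input_shape flag mentioned in the Model Optimizer warning can override it; a hedged example (shape order assumed NHWC, matching the original TF image_tensor input; 800x800 shown only to make the default explicit):

python D:\Programs\Intel\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py ^
  --input_model frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config pipeline.config ^
  --tensorflow_use_custom_operations_config mask_rcnn_support.json ^
  --reverse_input_channels --data_type=FP32 ^
  --input_shape [1,800,800,3]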


Dearest Shubha:

Thank you very much again! I should have tried more.

Have a nice day.

Yours sincerely,

Jin.

ausk commented 5 years ago

I just don't know why it failed the first time I converted with mask_rcnn_support.json.

Now I have tried TF 1.5, 1.12, 1.13.1, and 1.14 with mask_rcnn_support.json from OpenVINO 2019 R1 to convert the pretrained model from mask_rcnn_inception_v2_coco_2018_01_28.tar.gz, and all of them work.
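For completeness, here is a rough sketch of how such a cross-version check could be scripted (bash, hypothetical virtualenv names; assumes $mo points at the model_optimizer directory, as in the commands above):

# Try the conversion against several TensorFlow versions in throwaway venvs.
for v in 1.5.0 1.12.0 1.13.1 1.14.0; do
  python3 -m venv "tf-$v"
  source "tf-$v/bin/activate"
  pip install -r "$mo/requirements_tf.txt"   # Model Optimizer prerequisites
  pip install "tensorflow==$v"               # then pin the TF version under test
  python "$mo/mo_tf.py" \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --tensorflow_use_custom_operations_config mask_rcnn_support.json \
    --reverse_input_channels --data_type=FP32
  deactivate
done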

Thank you again!

shubha-ramani commented 5 years ago

Dear @ausk, it may have failed because you used "" (quotes). I can't really file a bug on this, because our documentation does not advertise that you should quote those parameters. I'm super glad that you got it working, though, and I'm glad that you did not give up on OpenVINO!

Shubha