Arham-Aalam opened 5 years ago
I tried, but got this: [ ERROR ] Shape [-1 -1 4] is not fully defined for output 0 of "input_anchors". Use --input_shape with positive integers to override model input shapes.
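That error is asking for every dynamic (-1) dimension to be pinned. A sketch of the kind of command that should get past this particular message, assuming a 512x512 model with Matterport's default meta and anchor sizes; the real shapes must come from summarize_graph.py or from running the model in inference mode, so treat every number below as a placeholder:

```shell
# All shapes here are assumptions for a 512x512 input with default Matterport
# settings (meta length 16, 65472 anchors); replace them with the shapes your
# own frozen model actually uses.
python3 mo_tf.py \
  --input_model freeze_model.pb \
  --input input_image,input_image_meta,input_anchors \
  --input_shape "[1,512,512,3],[1,16],[1,65472,4]" \
  --data_type FP16
```

Note this only clears the shape error; later comments in this thread show the conversion then stopping on ops inside roi_align_classifier.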
@yerzhik I found that OpenVINO doesn't support the custom layers used in this Mask RCNN implementation. I raised this issue in the hope that someone can crack it.
I am seeing the same error. Did either of you have any more luck with this?
@antithing not yet!!
@Arham-Aalam OpenVINO provides pre-trained Mask R-CNN models, along with the mask_rcnn_support.json and pipeline.config files needed to convert them with mo.py into the XML/BIN format.
@yerzhik Can you share some links? As far as I know, Mask RCNN only works with OpenVINO through the Model Zoo implementation; I haven't found anything for Matterport's implementation. Thanks.
@Arham-Aalam Here is the link https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
Maybe it's what you were talking about. Tell me if you can run Mask RCNN using those models.
I tried --tensorflow_use_custom_operations_config extensions\front\tf\mask_rcnn_support_api_v1.11.json but it didn't work. My guess is that the json file only supports the TensorFlow Object Detection API. Has anyone cracked it?
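That guess seems right: mask_rcnn_support_api_v1.11.json matches node names produced by the TensorFlow Object Detection API exporter, which Matterport's Keras graph does not use. For reference, the documented invocation for an OD-API Mask R-CNN export (not Matterport's) looks roughly like this; file paths are placeholders:

```shell
# Applies only to TF Object Detection API exports
# (e.g. mask_rcnn_inception_v2_coco), not to Matterport's graph.
# All paths below are placeholders for your own files.
python3 mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support_api_v1.11.json \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --reverse_input_channels
```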
Hello to all. Is there any news on converting this Mask RCNN model to OpenVINO? I have just finished training on my dataset and need to convert the model to move to the NCSK Movidius. Since this is a thesis project for me, I am keen to solve the problem as soon as possible. Model inspection as described on the Intel website:
python3 /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model logs/freeze_model.pb
3 input(s) detected:
Name: input_image, type: float32, shape: (-1,-1,-1,3)
Name: input_image_meta, type: float32, shape: (-1,16)
Name: input_anchors, type: float32, shape: (-1,-1,4)
7 output(s) detected:
output_detections
output_mrcnn_class
output_mrcnn_bbox
output_mrcnn_mask
output_rois
output_rpn_class
output_rpn_bbox
Conversion test as described on the Intel site:
python3 /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo_tf.py --input_model logs/freeze_model.pb --input_shape "(1,512,512,3)" --input input_image --data_type FP16 --tensorboard_logdir logs/landing20191105T1206/events.out.tfevents.1572952039.hpc-g01-node01.unitn.it
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /Users/francesco/PycharmProjects/ProjectThesis/logs/freeze_model.pb
- Path for generated IR: /Users/francesco/PycharmProjects/ProjectThesis/.
- IR output name: freeze_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: input_image
- Output layers: Not specified, inherited from the model
- Input shapes: (1,512,512,3)
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: /Users/francesco/PycharmProjects/ProjectThesis/logs/landing20191105T1206/events.out.tfevents.1572952039.hpc-g01-node01.unitn.it
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.3.0-375-g332562022
Writing an event file for the tensorboard...
Done writing an event file.
[ ERROR ] --input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #27.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.input_cut.InputCut'>): --input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #27.
Hello all,
After hitting the same issues with Matterport's implementation, I tried running the model in inference mode, which gave me actual input shapes rather than the '-1' placeholders. These are probably tied to my images' dimensions, but for the sake of experiment/expediency I tried them.
The shapes I had, for 512x512 images, were the following: input_str = '"input_image[1 512 512 3],input_image_meta[1 18],input_anchors[1 65520 4]"'. These shapes are specific to my training, but running an inference on your model should give you something similar, I would assume.
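Those numbers can be sanity-checked against Matterport's code: image_meta has length 12 + NUM_CLASSES (image_id, original shape, resized shape, window, scale, plus one flag per class), and the anchor count is the sum over the five backbone strides of (feature map size)^2 times 3 anchor ratios. A quick check with the defaults follows; the 18 and 65520 above imply a different class count and anchor config than the assumptions used here:

```shell
# Assumes Matterport defaults: backbone strides 4,8,16,32,64 and 3 anchor ratios.
# meta length = 1 (image_id) + 3 (original shape) + 3 (resized shape)
#             + 4 (window) + 1 (scale) + NUM_CLASSES
NUM_CLASSES=4                 # assumption: matches the (-1,16) meta shape above
META_LEN=$((12 + NUM_CLASSES))
DIM=512
ANCHORS=0
for STRIDE in 4 8 16 32 64; do
  FEAT=$((DIM / STRIDE))
  ANCHORS=$((ANCHORS + 3 * FEAT * FEAT))
done
echo "meta=$META_LEN anchors=$ANCHORS"   # meta=16 anchors=65472
```

With those defaults a 512x512 model gives 65472 anchors, so a count of 65520 suggests non-default anchor scales or strides in that particular config.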
Running this through the model optimizer, I got the following error:
!python {mo_tf_path} --input_model {pb_file} --output_dir {output_dir} --data_type FP16 --input {input_str}
[ ERROR ] Cannot infer shapes or values for node "roi_align_classifier/Where_3".
[ ERROR ] Input 0 of node roi_align_classifier/Where_3 was passed int32 from roi_align_classifier/Equal_3_port_0_ie_placeholder:0 incompatible with expected bool.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x000001EE20702950>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Input 0 of node roi_align_classifier/Where_3 was passed int32 from roi_align_classifier/Equal_3_port_0_ie_placeholder:0 incompatible with expected bool. Stopped shape/value propagation at "roi_align_classifier/Where_3" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Input 0 of node roi_align_classifier/Where_3 was passed int32 from roi_align_classifier/Equal_3_port_0_ie_placeholder:0 incompatible with expected bool. Stopped shape/value propagation at "roi_align_classifier/Where_3" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
I was unable to get beyond that thus far. I would appreciate any further hints.
Anyone got any further with the conversion?
Did you succeed?
I have the same problem.
Please try https://docs.openvinotoolkit.org/latest/omz_demos_mask_rcnn_demo_cpp.html. Does it help?
Updated: it doesn't help, it is intended for a different mask rcnn implementation.
I want to use the Intel OpenVINO SDK for fast computation on the CPU. Has anyone tried this: https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow