dkurt / openvino_efficientdet

EfficientDet with Intel OpenVINO
https://github.com/openvinotoolkit/openvino
Apache License 2.0
12 stars 5 forks

Error Model Optimizer #8

Closed ghost closed 3 years ago

ghost commented 3 years ago

Hi, I have an issue when I launch mo.py; this is the error:

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "TensorArrayV2" node. For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

This is the run code.

python mo.py --input_model /Users/antoniosambataro/Desktop/frozen/d4/efficientdet-d4_frozen.pb --transformations_config /Users/antoniosambataro/Desktop/automl_efficientdet.json --input_shape "[1, 1024, 1024, 3]" -o /Users/antoniosambataro/Desktop/ --log_level=DEBUG

I'm using Ubuntu 18.04 with Python 3.7, and I tried 3.6 too. Can you help me, please? I also tested the d6 version and got the same error.

Thanks

dkurt commented 3 years ago

Hi! Can you please specify which version of TensorFlow you have?

python -c "import tensorflow as tf; print(tf.__version__)"

Also, try again with the master branch of OpenVINO to be sure that automl_efficientdet.json is up to date.

ghost commented 3 years ago

Hi, I have TensorFlow 2.3.1, and yes, automl_efficientdet.json is up to date.

Thanks

dkurt commented 3 years ago

Well, we tested with 2.1.0 and 2.3.0, so that might be the issue. Can you please share the model?

ghost commented 3 years ago

I am currently using checkpoints taken from here: https://github.com/google/automl/tree/master/efficientdet. The models are attached at this Mega link: https://mega.nz/file/mFhx2SQZ#L4DjWeZVrV5ukltwRWXDzAQ3GiU7kL50EmbZq8Qt1CY. Later I will try TensorFlow 2.3.0.

dkurt commented 3 years ago

Hi! Sorry for the delay.

I tried the latest master branch of OpenVINO (https://github.com/openvinotoolkit/openvino/commit/20df6eada6744b254798c046380a98fcc3bb6a87). Both the d4 and d6 models convert successfully; TensorFlow 2.0.0 is installed.

python3 model-optimizer/mo.py \
  --input_model d6/efficientdet-d6_frozen.pb \
  --input_shape "[1, 1280, 1280, 3]" \
  --transformations_config model-optimizer/extensions/front/tf/automl_efficientdet.json

xiaoweiChen commented 3 years ago

Hi @dkurt, thanks for your great work. I am hitting the same issue.

From my testing, I don't think the Python version (any 3.x is OK) or the TensorFlow 2 version is the cause of this issue.

I tested the models @samba45 uploaded to Mega, because my Windows PC cannot freeze the model; the Linux PC at my workplace can...

Well, the testing continues...

I saw your last comment and tried the conversion. I installed the official OpenVINO package l_openvino_toolkit_p_2021.1.110.tgz, used the converter from that package, and got the result below:

xiaowei@xiaowei-1125:/mnt/e/openSourceProjecrts/efficient-test/model$ mo.py --input_model d4/efficientdet-d4_frozen.pb --input_shape "[1,1024,1024,3]" --transformations_config automl_efficientdet.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /mnt/e/openSourceProjecrts/efficient-test/model/d4/efficientdet-d4_frozen.pb
        - Path for generated IR:        /mnt/e/openSourceProjecrts/efficient-test/model/.
        - IR output name:       efficientdet-d4_frozen
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,1024,1024,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
Model Optimizer version:        2021.1.0-1237-bece22ac675-releases/2021/1
[ ERROR ]  Cannot infer shapes or values for node "TensorArrayV2".
[ ERROR ]  Tensorflow type 21 not convertible to numpy dtype.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fc5c5083400>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "TensorArrayV2" node.
 For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)

The same output as @samba45 got.

And automl_efficientdet.json is the latest version.

I cloned the openvino repo, which has the latest automl_efficientdet.json; the JSON files are identical.

openvino git repo info:

xiaowei@xiaowei-1125:/mnt/e/openSourceProjecrts/openvino$ git log -n 1
commit 27c97a037f84e764c5570060aa8f60a68aca3579 (HEAD -> master, origin/master, origin/HEAD)
Author: Maxim Kurin <maxim.kurin@intel.com>
Date:   Wed Nov 11 17:40:37 2020 +0300

    [IE][VPU]: Optimize swish layer and remove swish replacement pass (#2993)

    * Swish layer optimization
    * Update VPU firmware 1468

Then I tried the git repo converter with the same input parameters:

xiaowei@xiaowei-1125:/mnt/e/openSourceProjecrts/efficient-test/model$ python3 /mnt/e/openSourceProjecrts/openvino/model-optimizer/mo.py --input_model d4/efficientdet-d4_frozen.pb --input_shape "[1,1024,1024,3]" --transformations_config /mnt/e/openSourceProjecrts/openvino/model-optimizer/extensions/front/tf/automl_efficientdet.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /mnt/e/openSourceProjecrts/efficient-test/model/d4/efficientdet-d4_frozen.pb
        - Path for generated IR:        /mnt/e/openSourceProjecrts/efficient-test/model/.
        - IR output name:       efficientdet-d4_frozen
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,1024,1024,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
Model Optimizer version:        unknown version

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /mnt/e/openSourceProjecrts/efficient-test/model/./efficientdet-d4_frozen.xml
[ SUCCESS ] BIN file: /mnt/e/openSourceProjecrts/efficient-test/model/./efficientdet-d4_frozen.bin
[ SUCCESS ] Total execution time: 119.66 seconds.
[ SUCCESS ] Memory consumed: 691 MB.

Wow! And trying d6:

xiaowei@xiaowei-1125:/mnt/e/openSourceProjecrts/efficient-test/model$ python3 /mnt/e/openSourceProjecrts/openvino/model-optimizer/mo.py --input_model d6/efficientdet-d6_frozen.pb  --input_shape "[1,1280,1280,3]" --transformations_config /mnt/e/openSourceProjecrts/openvino/model-optimizer/extensions/front/tf/automl_efficientdet.json
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /mnt/e/openSourceProjecrts/efficient-test/model/d6/efficientdet-d6_frozen.pb
        - Path for generated IR:        /mnt/e/openSourceProjecrts/efficient-test/model/.
        - IR output name:       efficientdet-d6_frozen
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,1280,1280,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
Model Optimizer version:        unknown version

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /mnt/e/openSourceProjecrts/efficient-test/model/./efficientdet-d6_frozen.xml
[ SUCCESS ] BIN file: /mnt/e/openSourceProjecrts/efficient-test/model/./efficientdet-d6_frozen.bin
[ SUCCESS ] Total execution time: 154.93 seconds.
[ SUCCESS ] Memory consumed: 1371 MB.

Perfect!

I tried tensorflow-cpu 2.0.0, 2.3.1, and 2.4.0rc with Python 3.6.9, 3.8.5, and 3.8.3; none of these versions changes the Model Optimizer outcome.

Next I will try the forward-pass script; hopefully I can get the right result with OpenVINO.

dkurt commented 3 years ago

@xiaoweiChen, that's right: OpenVINO from GitHub should be used for conversion. The resulting model can then be executed with the already released binaries.

ghost commented 3 years ago

Hi everyone, I tried the openvino repo and everything works perfectly! Can I ask what performance you get in FPS with the d6 model? Thanks!

dkurt commented 3 years ago

@samba45, you may try benchmarking it on Intel DevCloud.
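
For a rough local estimate before moving to DevCloud, FPS can be measured by timing repeated synchronous inference calls. A minimal sketch, where `infer` is a hypothetical stand-in for the compiled network's inference call (not part of this repo):

```python
import time

def measure_fps(infer, num_frames=100):
    """Run `infer` num_frames times and return frames per second."""
    # Warm up once so one-time initialization does not skew the timing.
    infer()
    start = time.perf_counter()
    for _ in range(num_frames):
        infer()  # one synchronous inference on a single frame
    elapsed = time.perf_counter() - start
    return num_frames / elapsed
```

With OpenVINO's Python API, `infer` would wrap something like `exec_net.infer({input_name: image})`; averaging over enough frames gives a reasonable throughput estimate for a single synchronous request, though DevCloud benchmarking across target hardware is more representative.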