openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Error] Cannot infer shapes or values for node "StatefulPartitionedCall/map/TensorArrayV2_2" #4305

Closed waghts95 closed 3 years ago

waghts95 commented 3 years ago
System information (version)

First, thank you very much, OpenVINO team, for your amazing work.

I have trained an efficientdet-d0 (TensorFlow) model and I am trying to convert it to IR. My command is: python mo_tf.py --saved_model_dir D:\rough\ed0\content\fine_tuned_model\saved_model

This is what I am getting:

Model Optimizer arguments: Common parameters:

2021-02-12 11:29:53.239524: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-02-12 11:29:53.239644: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]
2021-02-12 11:29:53.244203: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-12 11:29:53.246808: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
[ ERROR ] Cannot infer shapes or values for node "StatefulPartitionedCall/map/TensorArrayV2_2".
[ ERROR ] Tensorflow type 21 not convertible to numpy dtype.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x000002BB00584E58>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/map/TensorArrayV2_2" node. For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)
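
For reference, the debug re-run that the last error line suggests is just the same command with --log_level=DEBUG added (the path is the one from the command above):

python mo_tf.py --saved_model_dir D:\rough\ed0\content\fine_tuned_model\saved_model --log_level=DEBUG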

avitial commented 3 years ago

@waghts95 thanks for reaching out. Try specifying --input_shape in your MO command and see if that makes any difference. Have you tried freezing the model prior to using the Model Optimizer? If possible, it would be great if you could share the model files so we can try to see what the issue is here.

Just FYI, these are the MO arguments used for the efficientdet-d0-tf model included in Open Model Zoo (an assembled command follows the list below):

--input_shape=[1,512,512,3] \
--input=image_arrays \
--reverse_input_channels \
--input_model=$conv_dir/efficientdet-d0_saved_model/efficientdet-d0_frozen.pb \
--transformations_config=$mo_dir/extensions/front/tf/automl_efficientdet.json
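
Assembled into a single command, that recipe would look roughly as below. This is only a sketch: $conv_dir and $mo_dir stand for the conversion output and Model Optimizer install directories used in the Open Model Zoo recipe, and mo.py is assumed to live in $mo_dir.

python $mo_dir/mo.py \
    --input_model=$conv_dir/efficientdet-d0_saved_model/efficientdet-d0_frozen.pb \
    --transformations_config=$mo_dir/extensions/front/tf/automl_efficientdet.json \
    --input=image_arrays \
    --input_shape=[1,512,512,3] \
    --reverse_input_channels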

~Luis

waghts95 commented 3 years ago

@avitial, thank you very much for the update. Specifying --input_shape in the MO command did not help either. I have shared the model files that I trained on my dataset: https://drive.google.com/drive/folders/1bFpaK5KAZaIuDoJV143rP9HxcdYQT2Q9?usp=sharing

avitial commented 3 years ago

@waghts95 I am able to see the error on my end. I think the first failure comes from tensorflow.python.framework.errors_impl.InternalError: Tensorflow type 21 not convertible to numpy dtype, which I'm not entirely sure how to resolve.

We have a tutorial available on how to convert EfficientDet models from the AutoML repo. The tutorial uses EfficientDet-D4, but the conversion steps should be similar for EfficientDet-D0. Which implementation of EfficientDet are you using?
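
If you go through the AutoML route, the overall flow is roughly the one sketched below. Treat it as a sketch rather than the exact tutorial commands: the model_inspect.py flag names may differ between repo versions, and the checkpoint path is a placeholder.

git clone https://github.com/google/automl
cd automl/efficientdet
python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
    --ckpt_path=<path_to_checkpoint> --saved_model_dir=savedmodeldir
python mo.py \
    --input_model=savedmodeldir/efficientdet-d0_frozen.pb \
    --transformations_config=extensions/front/tf/automl_efficientdet.json \
    --input=image_arrays --input_shape=[1,512,512,3] --reverse_input_channels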

~Luis

waghts95 commented 3 years ago

Thank you very much for the reply. I tried that tutorial, but it is not working either. I think the error was 'Can't load save_path when it is None', the same error as in the other issue you mentioned above.

avitial commented 3 years ago

Got it. Are you carefully following the steps listed? The Can't load save_path when it is None error is coming from the model-freezing step, is that correct? And is your custom-trained model based on the EfficientDet implementation used in the tutorial?

The model checkpoint efficientdet-d0.tar.gz referenced in the "Pretrained EfficientDet Checkpoints" section has the contents below, which differ from the model files you've provided. When I attempt to freeze your model, I get a similar error to the one you reported in the other issue.

efficientdet-d0
├── checkpoint
├── d0_coco_test-dev2017.txt
├── d0_coco_val.txt
├── model.data-00000-of-00001
├── model.index
└── model.meta

waghts95 commented 3 years ago

You said the files are different, but they work correctly when we run predictions in TensorFlow. Shall I provide you the Google Colab notebook where these files are generated?

waghts95 commented 3 years ago

I am using efficientdet-d0. I am sharing the Colab notebook that I used for training this model with you (luis.e.avitia@intel.com and lavitia9@gmail.com): https://colab.research.google.com/drive/1wehP9A2QzLk6PgD1DUAW-zwCWTKk6N_S#scrollTo=ypWGYdPlLRUN

waghts95 commented 3 years ago

Please reply.

avitial commented 3 years ago

@waghts95 sorry for the delay in my response. It looks like this implementation of EfficientDet uses an SSD-style architecture (SSD with EfficientNet-b4 + BiFPN), which is why the steps in the tutorial do not really apply to your scenario; the tutorial uses the EfficientDet implementation from the AutoML repo.

I don't believe this model architecture is currently supported, and you may need to figure out how to load this non-frozen model as mentioned in the MO Developer Guide (see the reference commands below).
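
For reference, the non-frozen TensorFlow loading options described in that guide look roughly like the commands below; the paths are placeholders, and which option applies depends on which files the training run produced.

python mo_tf.py --saved_model_dir <path_to_saved_model_dir>
python mo_tf.py --input_meta_graph <path_to_model>.meta
python mo_tf.py --input_model <path_to_graph>.pb --input_checkpoint <path_to_checkpoint>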

~Luis

avitial commented 3 years ago

@waghts95 I am currently checking with the dev team for additional input; I will share any details as soon as I hear back.

BR, ~Luis

Ref. 50525

waghts95 commented 3 years ago

@avitial, did you receive any information regarding the error from the development team?

lazarevevgeny commented 3 years ago

@waghts95, take the Model Optimizer from https://github.com/openvinotoolkit/openvino/pull/4772 and try to convert the model using the following command line:

./mo.py --saved_model_dir <model_dir>/saved_model/ --transformations_config extensions/front/tf/ssd_support_api_v2.4.json --tensorflow_object_detection_api_pipeline_config <model_dir>/pipeline.config --reverse_input_channels --scale 127.5 --mean_values [127.5,127.5,127.5]
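
If it helps, one way to pick up the Model Optimizer from that pull request before it is merged is to fetch and check out the PR head ref. The local branch name below is arbitrary, and the Model Optimizer sources were under model-optimizer/ in the repository at that time, if I remember the layout correctly.

git clone https://github.com/openvinotoolkit/openvino.git
cd openvino
git fetch origin pull/4772/head:mo-pr-4772
git checkout mo-pr-4772
cd model-optimizer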

waghts95 commented 3 years ago

Okay

waghts95 commented 3 years ago

@lazarevevgeny, Thank you. This worked.

avitial commented 3 years ago

The PR has been merged into the master branch for 2021.4 with commit hash 522ad39; closing the issue.

shashank332 commented 2 years ago

How do I take the Model Optimizer from PR #4772?