Closed: puitk-olp closed this issue 2 years ago.
Hello @puitk-olp,
Thank you for reaching out to OpenVINO! As you've already noticed, IsFinite is not supported yet by Model Optimizer, but it is in our backlog.
In the meantime, you can try to prepare an extension on your own to make it work. Please try the following: https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer.html#model-optimizer-extensions
Thank you!
Thank you, @andrei-kochin, for all your suggestions. I'll try to prepare my extension.
I'd like to ask one more thing: does the lack of support for the IsFinite operation also imply a lack of support for Softmax (IsFinite is used by the TF function that calculates Softmax) for models defined in Keras (and then converted to saved_model format before converting to IR)?
BR
@puitk-olp By the way, as you can see in the output, --disable_nhwc_to_nchw is optional now and may be omitted, which will shorten your command line a bit. Also, input_shape can be left unspecified, so your command might be as short as:
mo --saved_model_dir=./ --log_level=ERROR --input=input0
When --disable_nhwc_to_nchw is omitted, I get a dimension mismatch for some layer during conversion to IR, so it's necessary in my case.
@puitk-olp, I can suggest a quick workaround: replace the IsFinite operation with IsNonEqual, with two inputs (the data and a Constant holding a floating-point infinity value), using a front transformation. Of course, this transformation will only work if you don't have NaN values in the data. An example of a front transformation can be found here: openvino\tools\mo\openvino\tools\mo\front\Log1p.py.
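The numeric idea behind this rewrite can be sanity-checked in plain Python before writing the actual Model Optimizer transformation. This is a minimal sketch only (the helper name is made up, and the abs() is an assumption added here so that -inf is also caught); it also demonstrates the NaN caveat mentioned above:

```python
import math

def is_finite_workaround(x):
    # Emulates the suggested graph rewrite: IsFinite(x) -> NotEqual(|x|, +inf).
    # abs() is an assumption added in this sketch so that -inf is rejected too;
    # without it, -inf != +inf would wrongly report -inf as finite.
    return abs(x) != math.inf

# Agrees with math.isfinite for ordinary and infinite inputs:
for v in (0.0, 1.5, -2.0, math.inf, -math.inf):
    assert is_finite_workaround(v) == math.isfinite(v)

# ...but NOT for NaN, which is exactly the caveat above:
# abs(nan) != +inf evaluates to True, while math.isfinite(nan) is False.
assert is_finite_workaround(math.nan) is True
assert math.isfinite(math.nan) is False
```

This makes the "no NaN values in the data" precondition concrete: the replacement is exact on every input except NaN.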
@rkazants, in my case the IsFinite op is not used directly by me, but by the Softmax layer (TF uses IsFinite in a function that calculates Softmax), so it will be difficult to replace.
My model is originally defined in Keras, and I'm not sure what is introduced into it while saving the H5 model to TF saved_model format (required by MO).
@puitk-olp starting from 2022.1, --disable_nhwc_to_nchw is optional and should not make any difference to your graph. Does it really change the graph?
Softmax should be supported according to FWK supported layers and should be represented as OpenVINO Softmax.
@andrei-kochin

> Softmax should be supported according to FWK supported layers and should be represented as OpenVINO Softmax.

I've probably figured it out. There are two Softmax definitions in Keras, keras.activations.softmax and keras.layers.Softmax, and their implementations differ:
keras.activations.softmax uses tf.nn.softmax(), which internally uses tf.exp() and tf.reduce_sum();
keras.layers.Softmax uses tf.exp() and tf.reduce_logsumexp(), and the latter uses tensorflow.python.ops.is_finite(), which is not supported in OV.
Summarizing, I suspect that in OpenVINO, Softmax is supported as an activation function of another layer, and not as a separate layer. Could that be the case?
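The two code paths described in this comment can be sketched in plain Python (a simplified, hedged mirror of the TF logic; the function names here are illustrative, not the actual Keras/TF API):

```python
import math

def logsumexp(xs):
    # Numerically stable log-sum-exp in the spirit of tf.reduce_logsumexp:
    # subtract the maximum before exponentiating, but only if that maximum
    # is finite. This finiteness check is where TF inserts the IsFinite op
    # that Model Optimizer cannot convert.
    raw_max = max(xs)
    shift = raw_max if math.isfinite(raw_max) else 0.0
    return math.log(sum(math.exp(x - shift) for x in xs)) + shift

def softmax_layer_style(xs):
    # keras.layers.Softmax path: exp(x - logsumexp(x)); pulls in IsFinite.
    lse = logsumexp(xs)
    return [math.exp(x - lse) for x in xs]

def softmax_activation_style(xs):
    # keras.activations.softmax path: exp(x) / sum(exp(x)); no IsFinite.
    m = max(xs)  # max-shift for stability, with no finiteness check
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Both paths compute the same softmax on finite inputs; only the second
# avoids the unsupported IsFinite op.
a = softmax_layer_style([1.0, 2.0, 3.0])
b = softmax_activation_style([1.0, 2.0, 3.0])
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```

This is why swapping one Keras softmax for the other, as suggested below the original comment, is a viable workaround: the results agree, but the traced TF graphs differ in the ops they contain.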
@puitk-olp, you are correct. Very often Keras operations are expressed as a decomposition of several TensorFlow operations, and the softmax operation is such a case.
In your case, you can work around this by using another softmax; this is simpler than implementing a transformation for IsFinite.
Hello @puitk-olp,
Do you have any news for us? Have you succeeded with the suggested workaround?
Hello @andrei-kochin, I asked the model's creators (I am only a user of the model) to use another softmax function in it, but I haven't received any feedback from them yet.
Closing this. Feel free to reopen and provide additional information or ask any questions related to this topic.
System information (version)
Detailed description
I work with a custom Keras model with MultiHeadAttention layers and have been trying to convert it to IR with Model Optimizer. I get an error during conversion that some operations cannot be converted (IsFinite). This operation is part of the ReduceLogSumExp op, which is in turn part of the Softmax op.
Softmax is listed as a supported op on https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html; however, ReduceLogSumExp and IsFinite are not.
The Softmax layer is widely used in different types of NN models, and it's surprising to me that the conversion cannot be done (for TF2).
I checked the TensorFlow source code, and it really uses the functions reduce_logsumexp and is_finite to calculate Softmax in all versions of TF2 (2.4.1 and above).
My questions are:
1) May the problem come from an improper conversion from h5 to saved_model format, or did this conversion (somehow) change the architecture of the model? Maybe I am doing something wrong.
2) Are there any ways to overcome the problem of the unsupported IsFinite operation in OpenVINO (preferably with pure Python implementations)? In previous versions of OV it was possible to offload some operations and/or subgraphs (with the --tensorflow_operation_patterns or --tensorflow_subgraph_patterns options of MO); is that no longer possible?
3) Is it possible to add support for the IsFinite op (for TF2) in future releases of OpenVINO?
Steps to reproduce
Conversion from h5 to saved_model format:
Model's compilation with Model Optimizer:
I also tried to load model with custom objects configuration:
but Model Optimizer produced the same errors during conversion.
Issue submission checklist