Closed DuckJ closed 4 years ago
Model Optimizer version is 2020.1.0-61-gd349c3ba4a
Hi @DuckJ,
Thanks for reaching out! It looks like you are using an older release of the OpenVINO toolkit. Could you try the latest release (2020.4) and see if the issue is still present? Also, which target device are you using?
Regards, Jesus
@jgespino Thanks for your reply. My target platform is Red Hat Linux, and I built my project with gcc 4.8.5.
@jgespino I tried the latest release (2020.4); the error is now

```
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Incorrect shape of state ports for layer 891/Split/forward/LSTMCell_sequence
```

In 2020.1 the error was

```
Incorrect shape of state ports for layer 891/Split/reverse/LSTMCell_sequence
```
@DuckJ Let me check with my peers and get back to you.
Regards, Jesus
@DuckJ hi! Can you send out your onnx and IR models, please?
@dmitryte I'm sorry, the model is saved on my company's server and I don't have permission to export it.
See https://github.com/openvinotoolkit/openvino/issues/436; the CRNN model there seems to have a similar problem. The PyTorch model can be converted to ONNX via `torch.onnx.export` (the model-loading code can refer to this), and then converted to IR via

```
python mo.py --framework onnx --input_model ***.onnx --input_shape [1,1,32,100] --mean_values [127.5] --scale_values [127.5] --model_name **** --output_dir *****
```
Hi @DuckJ !
I converted the model to onnx with the command you provided and then used MO with the shape 2 1 32 100. Inference works fine in 2020.4.
Hi @dmitryte. When MO is used with the shape 2 1 32 100, inference works with batch size 2, but if you change the batch size to 3 or another value, does everything still work?
@DuckJ I experience the same issues with runtime reshape.
As for the recommendations in the last comment of #436, you can generate several IRs and choose between them manually inside your script according to your inputs. That would be the best workaround for this case right now.
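The workaround above can be sketched as a small dispatcher. The IR directory, the `crnn_b{N}.xml` naming scheme, and the set of pre-built batch sizes are assumptions for illustration; each IR would be produced by a separate Model Optimizer run with the corresponding `--input_shape`:

```python
from pathlib import Path

# Hypothetical layout: one IR per supported batch size, pre-generated with MO
# (crnn_b1.xml, crnn_b2.xml, ...), all stored in one directory.
IR_DIR = Path("irs")
SUPPORTED_BATCHES = (1, 2, 4, 8)

def pick_ir(batch_size: int) -> Path:
    """Return the IR whose batch fits, rounding up to the next supported size."""
    for b in SUPPORTED_BATCHES:
        if batch_size <= b:
            return IR_DIR / f"crnn_b{b}.xml"
    raise ValueError(f"batch {batch_size} exceeds the largest pre-built IR")
```

The chosen `.xml` (with its `.bin` beside it) is then loaded through the Inference Engine as usual; inputs smaller than the IR's batch can be zero-padded before inference.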
@dmitryte Thanks, got it
@DuckJ, I'm closing this ticket for now. If I find out how to support the reshape without any hacks, I'll reopen and provide the update.
I used the PyTorch crnn code with the CNN backbone changed to MobileNetV3-Large 1.0 to train an OCR recognition model. The input size is set to (1,1,32,100), and I converted it to an ONNX model successfully, although there is a warning about the LSTM batch size. I think it is caused by the following source code: `class BidirectionalLSTM(nn.Module):`
I also converted the ONNX model to OpenVINO xml and bin successfully. With batch size 1, inference is normal, but when I try batch size 2, the model can't be loaded to the device; the error is

```
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Incorrect shape of state ports for layer 891/Split/reverse/LSTMCell_sequence
```
I guess it is caused by the LSTM not being batch-first. Can anyone help? Thanks.