openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

BiLSTM converted into OpenVINO model: when batch size > 1, can't load model to device #1911

Closed DuckJ closed 4 years ago

DuckJ commented 4 years ago

I used the PyTorch crnn code with the CNN backbone changed to MobileNetV3-Large 1.0 to train an OCR recognition model. The input size is set to (1, 1, 32, 100), and the model converts to ONNX successfully, although there is a warning about the LSTM batch size. I think it is caused by the following source code:

```python
import torch.nn as nn

class BidirectionalLSTM(nn.Module):
    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTM, self).__init__()
        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)  # batch_first=False by default
        self.embedding = nn.Linear(nHidden * 2, nOut)

    def forward(self, input):
        recurrent, _ = self.rnn(input)    # (T, b, nHidden * 2)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)  # flatten time and batch for the linear layer

        output = self.embedding(t_rec)    # [T * b, nOut]
        output = output.view(T, b, -1)    # restore (T, b, nOut)

        return output
```

I also converted the ONNX model into OpenVINO XML and BIN successfully. When the batch size is set to 1, inference is normal, but when I try batch size = 2 the model can't be loaded to the device. The error is `terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException' what(): Incorrect shape of state ports for layer 891/Split/reverse/LSTMCell_sequence`. I guess it is caused by the LSTM batch_first handling. Can anyone help me? Thanks.
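For reference, the export step amounts to roughly the sketch below, shown on the LSTM head alone (the layer sizes, file name, and dummy input are illustrative assumptions, not the exact script from this issue):

```python
import torch

# Reuse the BidirectionalLSTM defined above; the sizes below are made up.
head = BidirectionalLSTM(nIn=512, nHidden=256, nOut=37)
head.eval()
dummy = torch.randn(26, 1, 512)  # (T, b, nIn), traced with batch b = 1

torch.onnx.export(
    head,
    dummy,
    "bilstm_head.onnx",
    input_names=["features"],
    output_names=["logits"],
)
# Tracing evaluates view(T * b, h) with b = 1, so the exported graph (and
# any IR generated from it) is specialized to that batch size, consistent
# with the warning at export time and the load failure at batch 2.
```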

DuckJ commented 4 years ago

Model Optimizer version is 2020.1.0-61-gd349c3ba4a

jgespino commented 4 years ago

Hi @DuckJ,

Thanks for reaching out! It looks like you are using an older release of the OpenVINO toolkit. Could you try the latest release (2020.4) and see if the issue is still present? Also, which target device are you using?

Regards, Jesus

DuckJ commented 4 years ago

@jgespino Thanks for your reply. My target device is Linux (Red Hat 4.8.5), and I used gcc 4.8.5 to build my project.

DuckJ commented 4 years ago

@jgespino I tried the latest release (2020.4); the error is now `terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException' what(): Incorrect shape of state ports for layer 891/Split/forward/LSTMCell_sequence`, whereas the error in 2020.1 was `Incorrect shape of state ports for layer 891/Split/reverse/LSTMCell_sequence`.

jgespino commented 4 years ago

@DuckJ Let me check with my peers and get back to you.

Regards, Jesus

dmitryte commented 4 years ago

@DuckJ hi! Can you send out your onnx and IR models, please?

DuckJ commented 4 years ago

@dmitryte I am sorry, the model is saved on a company server and I don't have permission to export it. See https://github.com/openvinotoolkit/openvino/issues/436; the crnn model seems to have a similar problem. The PyTorch model can be converted to ONNX via `torch.onnx.export` (the model-loading code can refer to this), and then converted with `python mo.py --framework onnx --input_model ***.onnx --input_shape [1,1,32,100] --mean_values [127.5] --scale_values [127.5] --model_name **** --output_dir *****`.

dmitryte commented 4 years ago

Hi @DuckJ !

I converted the model to ONNX with the command you provided and then used MO with the shape `2 1 32 100`. Inference works fine in 2020.4.
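The check amounts to roughly the following (a sketch against the 2020.4 Inference Engine Python API; the IR file names are placeholders):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# IR generated by MO with --input_shape [2,1,32,100]; paths are placeholders.
net = ie.read_network(model="crnn_b2.xml", weights="crnn_b2.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
batch = np.random.rand(2, 1, 32, 100).astype(np.float32)
results = exec_net.infer({input_name: batch})
print({name: out.shape for name, out in results.items()})
```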

DuckJ commented 4 years ago

Hi, @dmitryte. When MO is used with the shape 2 1 32 100, inference works with batch size 2, but if you change the batch size to 3 or something else, is everything still OK?

DuckJ commented 4 years ago

@dmitryte https://github.com/openvinotoolkit/openvino/issues/436

dmitryte commented 4 years ago

@DuckJ I experience the same issues with runtime reshape.

As for the recommendations in the last comment in #436, you can generate several IRs and choose among them manually inside your script according to your inputs (see the sketch below). That would be the best workaround for this case right now.
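A minimal sketch of that workaround, assuming one IR was generated per batch size with MO (the file names and supported batch sizes here are illustrative):

```python
from openvino.inference_engine import IECore

# One IR per batch size, generated ahead of time with MO using
# --input_shape [N,1,32,100]; the paths below are made up for illustration.
IR_PATHS = {
    1: ("crnn_b1.xml", "crnn_b1.bin"),
    2: ("crnn_b2.xml", "crnn_b2.bin"),
    3: ("crnn_b3.xml", "crnn_b3.bin"),
}

ie = IECore()
exec_nets = {}
for n, (xml, weights) in IR_PATHS.items():
    net = ie.read_network(model=xml, weights=weights)
    input_name = next(iter(net.input_info))
    exec_nets[n] = (input_name, ie.load_network(network=net, device_name="CPU"))

def infer(images):
    """Dispatch to the pre-loaded network that matches the batch size."""
    input_name, exec_net = exec_nets[images.shape[0]]  # KeyError if unsupported
    return exec_net.infer({input_name: images})
```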

DuckJ commented 4 years ago

@dmitryte Thanks, got it

dmitryte commented 4 years ago

@DuckJ, I'm closing this ticket for now. If I find out how to support the reshape without any hacks, I'll reopen and provide the update.