kthui closed this 2 years ago
Added a patch for setting the batch size:
```diff
$ git diff 7dd68dcf124ec43f2ad1e3ede3d8f88a59c3f6ef 5c5c64728635c80e0d2fe96d54ae48e4ae0097ce
diff --git a/src/openvino.cc b/src/openvino.cc
index e2dcc28..0b6ee27 100644
--- a/src/openvino.cc
+++ b/src/openvino.cc
@@ -552,8 +552,10 @@ ModelState::ValidateInputs(const size_t expected_input_cnt)
       ov_model_, ppp.build(), "apply model input preprocessing");
 
   // Configuring the model to handle the max_batch_size
-  RETURN_IF_OPENVINO_ERROR(
-      ov::set_batch(ov_model_, MaxBatchSize()), "setting max batch size");
+  if (MaxBatchSize()) {
+    RETURN_IF_OPENVINO_ERROR(
+        ov::set_batch(ov_model_, MaxBatchSize()), "setting max batch size");
+  }
 
   return nullptr;  // success
 }
```
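For context, here is a minimal standalone sketch (not the backend code itself) of the same pattern: `ov::set_batch()` is only called when a positive max batch size is configured, since a value of 0 means batching is disabled and the model keeps its original, non-batched input shape. The model path and the `max_batch_size` value are placeholders.

```cpp
#include <openvino/openvino.hpp>

int main() {
  ov::Core core;
  std::shared_ptr<ov::Model> model = core.read_model("model.xml");

  // Placeholder value; in the backend this comes from the Triton model config.
  const int64_t max_batch_size = 8;  // 0 would mean batching is disabled

  // Skip ov::set_batch() when batching is disabled, so a model without a
  // batch dimension is not reshaped (and no error is raised).
  if (max_batch_size > 0) {
    ov::set_batch(model, max_batch_size);
  }

  ov::CompiledModel compiled = core.compile_model(model, "CPU");
  return 0;
}
```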
This change has been force-pushed into the "Upgrade to api 2.0" commit. New commits will be used moving forward.
Changed the documentation language to match Intel's description of the parameters more closely.