I am currently working with the YoloV3-tiny.
To import the network into a C++ project I use the OpenVINO Toolkit. More specifically, I follow this procedure to convert the network: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html
This procedure performs a conversion and an optimization of the model to prepare it for inference.
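For context, the conversion step from that guide boils down to running the Model Optimizer on the frozen TensorFlow graph together with the Yolo-specific .json configuration. A rough sketch of the command I use (file names and paths are from my setup, not canonical; older OpenVINO releases call the config flag --tensorflow_use_custom_operations_config):

```shell
# Convert the frozen YoloV3-tiny TensorFlow graph to OpenVINO IR (.xml/.bin).
# mo_tf.py and yolo_v3_tiny.json ship with the OpenVINO Toolkit;
# adjust the paths to match your installation.
python3 mo_tf.py \
  --input_model frozen_darknet_yolov3_model.pb \
  --transformations_config yolo_v3_tiny.json \
  --batch 1 \
  --data_type FP16 \
  --output_dir ./ir_model
```

It is precisely this .json file that exists only for versions up to 3, which is what blocks the same workflow for YoloV4.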
Now I would like to try YoloV4, because it seems more effective for the purposes of my project. The problem is that the OpenVINO Toolkit does not yet support this version: it only provides the .json configuration files (needed for the conversion) up to version 3, not for version 4.
What has changed structurally between version 3 and version 4 of Yolo?
Can I hope that the conversion procedure for YoloV3-tiny (or YoloV3) also works for YoloV4?
Is YoloV4 much slower than YoloV3-tiny when using only the CPU for inference?
When will YoloV4-tiny be available? Does anyone have information about this?
Thanks in advance to anyone who can give me useful information.