david8862 / keras-YOLOv3-model-set

end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies
MIT License

Conversion to openvino #194

Open ZiyueWangUoB opened 3 years ago

ZiyueWangUoB commented 3 years ago

Is there any official method to convert custom-trained YOLO models to OpenVINO? I've noticed that when training and then converting directly to OpenVINO, there are many errors in the detections (random bounding boxes, very large bounding boxes, etc.).

Converting per the OpenVINO docs (https://docs.openvinotoolkit.org/2021.2/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) requires a "transformations config" file for model conversion. However, this doesn't work with a custom-trained "tiny-yolo3-darknet" model because the layer names are different: the config file looks for the node "detector/yolo-v3/Reshape", while this repo implements that layer differently.

Has anyone succeeded in converting?

carlosbravo1408 commented 2 years ago

Maybe you need to find out what your output layers are. I had the same problem converting yolo3_mobilenet_lite.

This works for me:

  1. Check the output layers of your custom-trained model with:

    cd /opt/intel/openvino_2021.x.xxx/deployment_tools/model_optimizer/
    cd mo/utils
    sudo python3 summarize_graph.py --input_model /path/to/model_trained.pb

    The output will be similar to:

    1 input(s) detected:
    Name: image_input, type: float32, shape: (None,416,416,3)
    3 output(s) detected:
    conv2d_3/BiasAdd 
    conv2d_8/BiasAdd
    conv2d_13/BiasAdd

    the outputs may vary.

  2. Create a JSON file:

    [
        {
            "id": "TFYOLOV3",
            "match_kind": "general",
            "custom_attributes": {
                "classes": <number of classes of your model>,
                "anchors": [your custom anchors],
                "coords": 4,
                "num": 9,
                "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
                "entry_points": ["conv2d_3/BiasAdd", "conv2d_8/BiasAdd", "conv2d_13/BiasAdd"]
            }
        }
    ]

    the entry points should be the same as the output layers from step 1.

  3. Run the Model Optimizer:

    python3 $INSTALL_DIR/mo_tf.py --input_model <input/model.pb> --input_shape [2,416,416,3] --scale_values=image_input[255] --reverse_input_channels --data_type FP16 --tensorflow_use_custom_operations_config <json/file> --output_dir <OUTPUT/DIR>
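If you retrain often, hand-editing that JSON gets error-prone. Here is a small Python sketch (a helper of my own, not part of this repo or of OpenVINO) that builds the step-2 config from your class count, anchors, and entry points; the example anchors below are just the stock YOLOv3 COCO values:

```python
import json

def make_yolo_v3_config(num_classes, anchors, entry_points):
    """Build the TFYOLOV3 transformations-config structure expected
    by the Model Optimizer. coords/num/masks are the standard
    three-head YOLOv3 values."""
    return [{
        "id": "TFYOLOV3",
        "match_kind": "general",
        "custom_attributes": {
            "classes": num_classes,
            "anchors": anchors,
            "coords": 4,
            "num": 9,
            "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
            "entry_points": entry_points,
        },
    }]

# Example with the default YOLOv3 COCO anchors; substitute your own
# custom anchors and the entry points found in step 1.
config = make_yolo_v3_config(
    num_classes=80,
    anchors=[10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
             59, 119, 116, 90, 156, 198, 373, 326],
    entry_points=["conv2d_3/BiasAdd", "conv2d_8/BiasAdd", "conv2d_13/BiasAdd"],
)
print(json.dumps(config, indent=4))
```

Dump the printed JSON to a file and pass it via `--tensorflow_use_custom_operations_config`.
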
lakshin commented 1 year ago

@carlosbravo1408 Thank you very much, your first two steps worked perfectly. However, the third step was missing some options. If anyone else is struggling to convert the model to OpenVINO IR format, use the command below. Note the `--input=image_input` and `--scale_values=image_input[255]` options.

The .h5 file needs to be converted to frozen .pb format using tools/model_converter/keras_to_tensorflow.py prior to running the command below.
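That h5-to-pb step might look like the following (a sketch — the flag names are assumed from the repo's converter script, so check `keras_to_tensorflow.py --help` for the real ones, and substitute your own paths):

```shell
# Freeze the trained Keras .h5 model into a TensorFlow .pb graph
# before running the Model Optimizer (paths are placeholders).
python tools/model_converter/keras_to_tensorflow.py \
    --input_model=path/to/trained_model.h5 \
    --output_model=path/to/frozen_model.pb
```
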

python3 -- /opt/intel/openvino_2021.x.xxx/deployment_tools/model_optimizer/mo.py --framework=tf --data_type=FP16 --output_dir=/home/jesus/cases/1593-public-yolov3-wrng-predictions/omz/public/yolo-v3-tf/FP16 --model_name=yolo-v3-tf '--input_shape=[1,416,416,3]' --input=image_input '--scale_values=image_input[255]' --reverse_input_channels --transformations_config=/home/jesus/1593/omz/public/yolo-v3-tf/yolo-v3.json --input_model=/home/jesus/1593/omz/public/yolo-v3-tf/yolo-v3.pb

Refer: https://github.com/openvinotoolkit/openvino/issues/1593

Sometimes --layout "NHWC->NCHW" will also have to be specified.

Refer: https://github.com/luxonis/depthai/issues/784
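For context, `--layout "NHWC->NCHW"` only tells the converter how the tensor dimensions are ordered; the NHWC-to-NCHW change is a pure axis reorder, which this tiny sketch illustrates (the function is illustrative, not any OpenVINO API):

```python
def nhwc_to_nchw(shape):
    """Reorder a (batch, height, width, channels) shape tuple to
    (batch, channels, height, width), as the --layout flag describes."""
    n, h, w, c = shape
    return (n, c, h, w)

print(nhwc_to_nchw((1, 416, 416, 3)))  # → (1, 3, 416, 416)
```
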

If you want to further run it on an oak device you could upload the xml and bin files to the below url and get the blob.

https://blobconverter.luxonis.com/

Happy training!!