openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

accuracy_checker #13056

Closed largestcabbage closed 2 years ago

largestcabbage commented 2 years ago

How do I use accuracy_checker to measure the accuracy of YOLOv5? Why is there a problem with my configuration?

models:
  - name: models_FP16
    launchers:
      - framework: openvino
        tags:
          - FP16
        model:   models_FP16.xml
        weights: models_FP16.bin
        adapter:
          type: yolo_v3
          anchors: "10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326"
          num: 9
          coords: 4
          classes: 2
          anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
          outputs:
            - output1
            - output2
            - output3
    datasets:
      - name: VOC2012_without_background
        annotation_conversion:
          converter: voc_detection
          annotations_dir: F:\\dataset\\objectdetection\\pen_bottle\\xml
          images_dir: F:\\dataset\\objectdetection\\pen_bottle\\image_split
          imageset_file: F:\\dataset\\objectdetection\\pen_bottle\\txt\\val.txt
          has_background: False
        data_source: F:\\dataset\\objectdetection\\pen_bottle\\image_split
        annotation: F:\\test\\objectdetection\\new_annotations\\imagenet.pickle
        dataset_meta: F:\\test\\objectdetection\\new_annotations\\imagenet.json

        preprocessing:
          - type: bgr_to_rgb
          - type: resize
            size: 640
            aspect_ratio_scale: fit_to_window

          - type: normalization
            mean: 255

        postprocessing:
          - type: resize_prediction_boxes
          - type: filter
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: True
          - type: diou_nms
            overlap: 0.35
          - type: clip_boxes
            apply_to: prediction
        metrics:
          - type: map
            integral: 11point
            ignore_difficult: true
            presenter: print_scalar

objectdetection.zip

zulkifli-halim commented 2 years ago

Hi @largestcabbage, I downloaded the yolov5s model from Ultralytics, exported it to ONNX and IR format, and then used the IR files with Accuracy Checker.

The YML configuration file:

models:
  - name: yolov5s

    launchers:
      - framework: openvino
        device: CPU
        adapter:
          type: yolo_v5
          anchors: "10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326"
          num: 9
          coords: 4
          classes: 80
          anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
          outputs:
            - '326'
            - '378'
            - '430'

    datasets:
      - name: small
        data_source: "C:/Users/Downloads/small/val2017/"
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: "C:/Users/Downloads/small/annotations/instances_val2017.json"
          images_dir: "C:/Users/Downloads/small/val2017/"

        preprocessing:
          - type: resize
            size: 640

        postprocessing:
          - type: resize_prediction_boxes
          - type: filter
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: True
          - type: nms
            overlap: 0.5
          - type: clip_boxes
            apply_to: prediction

        metrics:
          - type: map
            integral: 11point
            ignore_difficult: true
            presenter: print_scalar
          - type: coco_precision
            max_detections: 100
            threshold: 0.5


eaidova commented 2 years ago

@largestcabbage Do I understand correctly that you have a yolo_v5 model pretrained on your own dataset, stored in a Pascal VOC-like format? There is some mess in your dataset section:

annotation: F:\test\objectdetection\new_annotations\imagenet.pickle
dataset_meta: F:\test\objectdetection\new_annotations\imagenet.json

Are these files that already exist on your machine, or are they just names used for saving the converted annotation? If they already exist, e.g. from earlier experiments with the ImageNet dataset, that can lead to problems, because they store the annotation for a different dataset which is not suitable for your model.

The solution provided by @zulkifli-halim is correct for the original model trained on MS COCO, except for the preprocessing section. It looks like normalization is not applied to the input during mo conversion, which is why the mAP score is too low; you need to convert the model with mo using --scale 255 --reverse_input_channels, or provide these steps as part of the preprocessing section, as in the initial config file.
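For reference, a sketch of the conversion-time alternative mentioned above (the ONNX file name is an assumption); the other option is the `bgr_to_rgb` plus `normalization` steps already present in the first config:

```shell
# Sketch: bake input scaling and channel reversal into the IR at
# conversion time, so the preprocessing section needs no extra steps
mo --input_model yolov5s.onnx --scale 255 --reverse_input_channels
```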

Regarding the dataset, I want to highlight that the voc_detection converter has a constant class mapping taken from the original dataset. If your labels are different, you need to provide dataset_meta_file to the converter: dataset_meta_file - path to a JSON file with dataset meta (e.g. label_map, color_encoding). Optional; more details in the Customizing dataset meta section.
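A minimal dataset_meta_file sketch for a two-class model; the label names here are hypothetical, guessed only from the `pen_bottle` dataset path, and must match the names used in your VOC XML files:

```json
{
    "label_map": {
        "0": "pen",
        "1": "bottle"
    }
}
```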

largestcabbage commented 2 years ago

Sorry, I tested it and it is still very low @eaidova

eaidova commented 2 years ago

> Sorry, I tested it and it is still very low @eaidova

@largestcabbage What exactly did you test? Did you specify the preprocessing? Are you sure the anchor values are ordered correctly?

The adapter section should probably be:

          type: yolo_v5
          anchors: 10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326
          num: 3
          coords: 4
          classes: 80
          threshold: 0.001
          anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
          raw_output: True
          transpose: [0, 3, 1, 2]
          output_format: BHW
          cells: [80, 40, 20]
          outputs:
            - '326'
            - '378'
            - '430'
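The anchor-ordering question above can be sketched in a few lines; this shows which (width, height) anchor pairs each `anchor_masks` entry selects, using the values from the adapter snippet. The point is the ordering: mask [0, 1, 2] must pick the smallest anchors for the highest-resolution grid.

```python
# Values taken from the yolo_v5 adapter config above
anchors = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
           59, 119, 116, 90, 156, 198, 373, 326]
anchor_masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
cells = [80, 40, 20]  # grid sizes for a 640x640 input (strides 8/16/32)

def anchors_for_mask(flat_anchors, mask):
    """Return the (w, h) pairs a mask selects from the flat anchor list."""
    pairs = list(zip(flat_anchors[0::2], flat_anchors[1::2]))
    return [pairs[i] for i in mask]

for grid, mask in zip(cells, anchor_masks):
    # e.g. the 80x80 grid gets the three smallest anchor boxes
    print(grid, anchors_for_mask(anchors, mask))
```

If the masks were reversed relative to the output resolutions, large objects would be matched against small anchors, which is one way the mAP can silently stay low.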
largestcabbage commented 2 years ago

The mAP has improved but is still lower than normal. Is it because of the threshold setting?