ultralytics / ultralytics

NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to calculate mAP for a 2-staged method (detection and classification)? #10085

Closed deepukr007 closed 3 months ago

deepukr007 commented 4 months ago

Search before asking

Question

Hi, I am working on small object detection and want to make the whole process robust. I am first training YOLO to detect all the classes (different types of birds) as a single class (bird). I am also training a classification model to classify the different birds, given the cropped images of birds from the main image.

Now that I have the 2 models ready, at inference time I will first run the detection model, get the boxes, crop the objects, pass the crops through the classifier, and relabel the image.

I am getting mAP for the detection model and validation accuracy for the classification model. Now I want to run the pipeline mentioned above and report the actual end-to-end mAP. What should my approach be?

Thanks in advance

Additional

No response

glenn-jocher commented 4 months ago

Hello there! 👋

Calculating the mAP for your two-stage method involves evaluating how well your detection model identifies objects (birds in your case) and how accurately your classification model classifies these detected objects into their respective bird classes.

A simplified approach would be as follows:

  1. For a set of test images, run your detection model to identify bounding boxes for birds.
  2. Crop these detected areas and pass them through your classification model to label each bird.
  3. Compile these predictions, including the bounding box coordinates, confidence scores, and class labels, into a single prediction file, aligning the format with your ground truth data (which should include true bounding box coordinates and true labels for each bird in the test set).
  4. Use a tool or script that calculates mAP, providing it with your compiled predictions and ground truth. The tool will compare the predicted labels and bounding box coordinates against the true labels and coordinates to compute the mAP.

Both steps (detection and classification) are crucial; mispredictions in either step affect the final accuracy.
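As a rough back-of-envelope sketch of why both stages matter (the numbers below are hypothetical, not measured on any model), the two error sources compound multiplicatively in the best case:

```python
# Hypothetical stage-wise scores; substitute your own validation numbers.
detector_recall = 0.90      # fraction of birds the detector finds
classifier_accuracy = 0.85  # accuracy on correctly detected crops

# Upper bound on the fraction of ground-truth birds that end up as
# true positives with the correct fine-grained label.
pipeline_recall_bound = detector_recall * classifier_accuracy
print(f"{pipeline_recall_bound:.3f}")  # prints 0.765
```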

Here's a pseudo-code snippet to illustrate the evaluation process:

# Assuming 'detector' is your trained detection model and 'classifier'
# is your trained classification model.
predictions = []

for image_id, image in enumerate(test_images):
    # Step 1: Detect objects
    detections = detector.detect(image)

    for box in detections:
        # Step 2: Crop the detected region and classify it
        crop = image.crop(box.coordinates)
        label = classifier.predict(crop)

        # Compile the prediction:
        # [image_id, x_min, y_min, x_max, y_max, confidence, predicted_label]
        predictions.append([image_id] + box.coordinates + [box.confidence, label])

# Step 3: Prepare your ground truth in a similar format for comparison

# Step 4: Use a mAP calculation tool/script, passing 'predictions' and the ground truth
mAP = calculate_mAP(predictions, ground_truth)
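If you go with pycocotools for step 4, the compiled predictions need to be reshaped into COCO's detection-result format (xywh boxes, numeric category IDs). Here is a minimal sketch, assuming each prediction row follows the [image_id, x_min, y_min, x_max, y_max, confidence, predicted_label] layout above; the `label_to_category_id` mapping and its entries are hypothetical placeholders for your own classes:

```python
import json

# Hypothetical mapping from classifier labels to COCO category IDs.
label_to_category_id = {"sparrow": 1, "robin": 2}

def to_coco_results(predictions):
    """Convert [image_id, x_min, y_min, x_max, y_max, conf, label] rows
    into COCO detection-result dicts (boxes in xywh format)."""
    results = []
    for image_id, x_min, y_min, x_max, y_max, conf, label in predictions:
        results.append({
            "image_id": image_id,
            "category_id": label_to_category_id[label],
            "bbox": [x_min, y_min, x_max - x_min, y_max - y_min],
            "score": conf,
        })
    return results

# Example row, then dump to the JSON file that COCOeval loads.
predictions = [[0, 10, 20, 50, 80, 0.9, "sparrow"]]
with open("results.json", "w") as f:
    json.dump(to_coco_results(predictions), f)
```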

Make sure your predictions and ground truth data are correctly formatted for the mAP calculation tool you choose. There are several open-source tools and libraries available for calculating mAP, such as pycocotools (the Python implementation of the COCO API).
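If pulling in pycocotools feels heavy for a first sanity check, the core of the metric can be sketched in plain Python. This is a simplified single-class, single-IoU-threshold average precision with greedy matching, not a replacement for the official COCO evaluation:

```python
def iou(a, b):
    """IoU of two [x_min, y_min, x_max, y_max] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (box, score); gts: list of boxes.
    Greedily matches predictions (highest score first) to unused
    ground truths, then integrates the precision-recall curve."""
    preds = sorted(preds, key=lambda p: -p[1])
    matched = set()
    tp, fp, ap, prev_recall = 0, 0, 0.0, 0.0
    for box, _score in preds:
        best, best_i = 0.0, None
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best:
                best, best_i = overlap, i
        if best >= iou_thr:
            matched.add(best_i)
            tp += 1
        else:
            fp += 1
        recall = tp / len(gts)
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

For a two-stage pipeline, a classification error can be treated as a false positive for the predicted class and a miss for the true class, which is exactly how a per-class AP loop over this function would penalize it.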

Hope this helps! Let us know if you have further questions. 🦜

github-actions[bot] commented 3 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the Ultralytics Docs at https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐