ultralytics / yolo-flutter-app

A Flutter plugin for Ultralytics YOLO computer vision models
https://ultralytics.com
GNU Affero General Public License v3.0

How to Integrate Yolo-OBB model into this application? #44

Open muhammad-qasim-cowlar opened 2 months ago

muhammad-qasim-cowlar commented 2 months ago

Currently the application uses the x, y, width, and height returned by the YOLO model to draw bounding boxes. What would I need to do to integrate YOLOv8-OBB into the existing application? The OBB model produces rotated polygons rather than axis-aligned rectangles, which would be helpful for my use case.

pderrenger commented 2 months ago

@muhammad-qasim-cowlar hello!

Integrating the YOLOv8-OBB model into your application to leverage oriented bounding boxes (OBBs) is a great idea, especially if your use case benefits from more precise object localization. Here are the steps you can follow to make this integration:

  1. Update to the Latest Version: Ensure you are using the latest version of the Ultralytics YOLO package to access the most recent features and bug fixes.

  2. Train or Load a YOLOv8-OBB Model: If you haven't already, you can train a YOLOv8-OBB model or load a pre-trained one. Here's a quick example of both:

    from ultralytics import YOLO
    
    # Create a new YOLOv8n-OBB model from scratch
    model = YOLO("yolov8n-obb.yaml")
    
    # Or load a pretrained YOLOv8n-OBB model
    model = YOLO("yolov8n-obb.pt")
    
    # Train the model on your dataset
    results = model.train(data="your_dataset.yaml", epochs=100, imgsz=640)
  3. Modify Your Application to Handle OBBs: Since OBBs are represented by four corner points, you will need to adjust your application to handle these points instead of the traditional x, y, width, height format. The YOLO OBB label format provides bounding boxes as x1, y1, x2, y2, x3, y3, x4, y4, with each coordinate normalized to the image dimensions (values between 0 and 1).

    Here’s an example of how you might modify your drawing function to handle OBBs:

    import cv2
    import numpy as np
    
    def draw_obb(image, obb):
        # obb is a list of 8 normalized values: [x1, y1, x2, y2, x3, y3, x4, y4]
        h, w = image.shape[:2]
        # Scale the normalized coordinates to pixel positions before casting to int
        points = [(obb[i] * w, obb[i + 1] * h) for i in range(0, len(obb), 2)]
        points = np.array(points, dtype=np.int32)
        cv2.polylines(image, [points], isClosed=True, color=(0, 255, 0), thickness=2)
    
    # Example usage
    image = cv2.imread("path_to_image.jpg")
    obb = [0.780811, 0.743961, 0.782371, 0.74686, 0.777691, 0.752174, 0.776131, 0.749758]
    draw_obb(image, obb)
    cv2.imshow("OBB", image)
    cv2.waitKey(0)
  4. Adjust Post-Processing: Ensure your post-processing pipeline can handle the OBB format. This might involve updating any code that processes detection outputs to work with the four corner points.

  5. Testing and Validation: Thoroughly test the integration to ensure that the OBBs are being drawn correctly and that the application behaves as expected with the new bounding box format.
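For step 4, note that OBB detections may also reach you as center, width, height, and rotation (the xywhr layout) rather than four corner points; converting between the two is plain trigonometry. Below is a minimal sketch of that conversion, with a sanity check along the lines of step 5. The function name and test values are illustrative, not part of the plugin:

```python
import math

def xywhr_to_corners(cx, cy, w, h, angle):
    """Convert a rotated box (center, size, rotation in radians)
    to its four corner points [(x1, y1), ..., (x4, y4)]."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    # Half-extent offsets of the four corners before rotation
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each offset and translate by the box center
    return [(cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a)
            for dx, dy in offsets]

# Sanity check: with zero rotation, the corners must match the
# ordinary axis-aligned x, y, width, height box.
corners = xywhr_to_corners(100, 50, 40, 20, 0.0)
print(corners)  # [(80.0, 40.0), (120.0, 40.0), (120.0, 60.0), (80.0, 60.0)]
```

The resulting corner list can be fed straight into the polygon-drawing function from step 3 (after normalizing or scaling to match your coordinate convention).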

By following these steps, you should be able to integrate the YOLOv8-OBB model into your application successfully. If you encounter any issues or have further questions, feel free to ask!

Best of luck with your integration! 😊


For more detailed information on OBB datasets and training, you can refer to the Ultralytics documentation.