
yolo_ros

ROS 2 wrapper for YOLO models from Ultralytics to perform object detection and tracking, instance segmentation, human pose estimation, and oriented bounding box (OBB) detection. There are also 3D versions of object detection (including instance segmentation) and human pose estimation, based on depth images.

Table of Contents

  1. Installation
  2. Docker
  3. Models
  4. Usage
  5. Demos

Installation

$ cd ~/ros2_ws/src
$ git clone https://github.com/mgonzs13/yolo_ros.git
$ pip3 install -r yolo_ros/requirements.txt
$ cd ~/ros2_ws
$ rosdep install --from-paths src --ignore-src -r -y
$ colcon build
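
Then source the workspace overlay so that ROS 2 can find the newly built packages:

$ source ~/ros2_ws/install/setup.bash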

Docker

Build the yolo_ros Docker image.

$ docker build -t yolo_ros .

Run the Docker container. To use CUDA, install the NVIDIA Container Toolkit and add --gpus all.

$ docker run -it --rm --gpus all yolo_ros
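
If the container must exchange topics with nodes running on the host, one option on a Linux host (a sketch, assuming default DDS discovery over a shared network) is to run with host networking:

$ docker run -it --rm --gpus all --net host yolo_ros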

Models

The following models are compatible with yolo_ros: YOLOv5, YOLOv8, YOLOv9, YOLOv10, YOLOv11, YOLO-NAS, and YOLO-World. Each has a corresponding launch file in the Usage section below.

Usage

YOLOv5

$ ros2 launch yolo_bringup yolov5.launch.py

YOLOv8

$ ros2 launch yolo_bringup yolov8.launch.py

YOLOv9

$ ros2 launch yolo_bringup yolov9.launch.py

YOLOv10

$ ros2 launch yolo_bringup yolov10.launch.py

YOLOv11

$ ros2 launch yolo_bringup yolov11.launch.py

YOLO-NAS

$ ros2 launch yolo_bringup yolo-nas.launch.py

YOLO-World

$ ros2 launch yolo_bringup yolo-world.launch.py

Topics

Parameters

These are the parameters from yolo.launch.py, which is used to launch all the models. Check out the Ultralytics page for more details.
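
As an illustration, launch arguments can be overridden on the command line. model:= appears elsewhere in this README; device, threshold, and input_image_topic are assumed names, so verify them with ros2 launch yolo_bringup yolo.launch.py -s:

$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m.pt device:=cuda:0 threshold:=0.5 input_image_topic:=/camera/rgb/image_raw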

Lifecycle Nodes

Previous updates added Lifecycle Node support to all the nodes available in the package. This implementation reduces the workload in the unconfigured and inactive states by loading the models and activating the subscriber only in the active state.

Below are some resource comparisons using the default yolov8m.pt model on a 30 fps video stream.

| State    | CPU Usage (i7 12th Gen) | VRAM Usage | Bandwidth Usage |
|----------|-------------------------|------------|-----------------|
| Active   | 40-50% in one core      | 628 MB     | Up to 200 Mbps  |
| Inactive | ~5-7% in one core       | 338 MB     | 0-20 Kbps       |
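
For example, a node can be moved between these states from the command line (the node name /yolo/yolo_node is an assumption; list the lifecycle nodes in your system with ros2 lifecycle nodes):

$ ros2 lifecycle set /yolo/yolo_node configure
$ ros2 lifecycle set /yolo/yolo_node activate
$ ros2 lifecycle set /yolo/yolo_node deactivate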

YOLO 3D

$ ros2 launch yolo_bringup yolov8.launch.py use_3d:=True

Demos

Object Detection

This is the standard behavior of yolo_ros, which includes object tracking.

$ ros2 launch yolo_bringup yolo.launch.py

Instance Segmentation

The instance masks are published as the contours (borders) of the detected objects rather than all the pixels inside them.

$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-seg.pt

Human Pose

Persons in the live stream are detected along with their keypoints.

$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-pose.pt

3D Object Detection

The 3D bounding boxes are calculated by filtering the depth image data from an RGB-D camera using the 2D bounding box. Only objects with a 3D bounding box are visualized in the 2D image.

$ ros2 launch yolo_bringup yolo.launch.py use_3d:=True
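
A minimal sketch of this filtering idea, assuming a pinhole camera model; the intrinsics (fx, fy, cx, cy) and the helper below are illustrative, not the package's actual implementation:

```python
import numpy as np

def bbox_to_3d(depth, bbox, fx, fy, cx, cy):
    """Estimate a 3D box center/size from a 2D bbox and a depth image.

    depth: HxW depth image in meters; bbox: (xmin, ymin, xmax, ymax) in pixels.
    """
    xmin, ymin, xmax, ymax = bbox
    roi = depth[ymin:ymax, xmin:xmax]
    valid = roi[np.isfinite(roi) & (roi > 0)]
    if valid.size == 0:
        return None  # no usable depth inside the box
    z = float(np.median(valid))          # robust central depth of the object
    u, v = (xmin + xmax) / 2, (ymin + ymax) / 2
    # Back-project the bbox center with the pinhole model
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Approximate metric extent of the box at depth z
    size_x = (xmax - xmin) * z / fx
    size_y = (ymax - ymin) * z / fy
    size_z = float(valid.max() - valid.min())
    return (x, y, z), (size_x, size_y, size_z)
```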

3D Object Detection (Using Instance Segmentation Masks)

In this case, the depth image data is filtered using the max and min values obtained from the instance masks. Only objects with a 3D bounding box are visualized in the 2D image.

$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-seg.pt use_3d:=True
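
Under the same assumptions as the sketch above, the mask-based variant only changes how the depth pixels are selected:

```python
import numpy as np

def mask_depth_range(depth, mask):
    """Depth extent of an object using its boolean instance mask instead of a bbox."""
    valid = depth[mask & np.isfinite(depth) & (depth > 0)]
    if valid.size == 0:
        return None
    # min/max depth values bound the 3D box along the camera axis
    return float(valid.min()), float(valid.max())
```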

3D Human Pose

Each keypoint is projected into the depth image and visualized using purple spheres. Only objects with a 3D bounding box are visualized in the 2D image.

$ ros2 launch yolo_bringup yolo.launch.py model:=yolov8m-pose.pt use_3d:=True
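
A sketch of the per-keypoint projection under the same pinhole assumptions (illustrative, not the package's code):

```python
def keypoint_to_3d(depth, u, v, fx, fy, cx, cy):
    """Back-project a pixel keypoint (u, v) to a 3D point in the camera frame.

    depth: HxW array in meters; returns None if no valid depth at the pixel.
    """
    z = float(depth[int(v), int(u)])
    if not (z > 0):
        return None  # invalid or missing depth reading
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```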