agillies8 opened 3 days ago
Actually - I think this may have to do with an inverted image, i.e. an RGB -> BGR channel swap. I wonder if I somehow trained on RGB images but they are being fed in as BGR images at runtime, since the colors red and blue are defining features of the classes.
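For what it's worth, a quick sketch (with made-up pixel values) of why that kind of channel swap would flip red and blue but leave green correct - only the first and third channels trade places:

```python
import numpy as np

# Illustrative values only: pure red and pure green pixels in RGB channel order.
red_rgb   = np.array([255, 0, 0], dtype=np.uint8)
green_rgb = np.array([0, 255, 0], dtype=np.uint8)

# Reading the same bytes in the opposite channel order reverses the channels.
print(red_rgb[::-1])    # [  0   0 255] -> a model expecting RGB sees this as pure blue
print(green_rgb[::-1])  # [  0 255   0] -> green is symmetric, so it stays correct
```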
Ok, further investigation:
In your yolo.py file:
```python
def image_cb(self, msg: Image) -> None:

    if self.enable:
        # convert image + predict
        cv_image = self.cv_bridge.imgmsg_to_cv2(msg)

        results = self.yolo.predict(
            source=cv_image,
            verbose=False,
            stream=False,
            conf=self.threshold,
            iou=self.iou,
            imgsz=(self.imgsz_height, self.imgsz_width),
            half=self.half,
            max_det=self.max_det,
            augment=self.augment,
            agnostic_nms=self.agnostic_nms,
            retina_masks=self.retina_masks,
            device=self.device,
        )
```
cv2 works with images in BGR channel order, but here the image gets passed straight to yolo.predict, which expects RGB. See this:
https://github.com/ultralytics/ultralytics/issues/9912
```python
import cv2
from ultralytics import YOLO

# Load the model
model = YOLO('yolov8n.pt')

# Read an image (in BGR format)
image_bgr = cv2.imread('path/to/image.jpg')

# Convert from BGR to RGB
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

# Perform inference
results = model(image_rgb)
```
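Applied inside image_cb, the fix would look roughly like this (just a sketch of the idea; it assumes the image cv_bridge hands back really is BGR-ordered):

```python
def image_cb(self, msg: Image) -> None:

    if self.enable:
        # convert image + predict
        cv_image = self.cv_bridge.imgmsg_to_cv2(msg)

        # ultralytics expects RGB, so swap the channel order before inference
        cv_image = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB)

        results = self.yolo.predict(
            source=cv_image,
            verbose=False,
            stream=False,
            conf=self.threshold,
            iou=self.iou,
            imgsz=(self.imgsz_height, self.imgsz_width),
            half=self.half,
            max_det=self.max_det,
            augment=self.augment,
            agnostic_nms=self.agnostic_nms,
            retina_masks=self.retina_masks,
            device=self.device,
        )
```

Another option might be to ask cv_bridge for RGB directly, e.g. `self.cv_bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')`, which would skip the extra cvtColor call, but the one-line conversion above is the minimal change.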
yep, fixed it, that did the trick!
@agillies8 thanks for the fix. I'll add the `cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)` line to fix it.
OK, made a PR with the changes. Here it is working now:
I have a YOLOv8 model which I trained with Roboflow (exported the best.pt model weights after training in a Colab notebook). The model has 3 classes:

- 0: blue box
- 1: green box
- 2: red box
I am able to run my model with this package, and everything seems to be working great. I'm streaming images generated from Isaac Sim of 'boxes' going by on a conveyor. I'm using this launch command inside the yolo_ros Docker container:
```bash
ros2 launch yolo_bringup yolo.launch.py model:=src/models/best640.pt input_image_topic:=/camera/rgb
```
And here are the results, seen from RViz:
https://www.youtube.com/watch?v=1G3n6le86cw
However, as you can see, for some reason the classes seem to be inverted: red boxes are labeled as blue and vice versa, while green boxes are labeled correctly. When I test the model on Roboflow, the classifications are correct. Any idea why this may be happening?