Hi @alt1-m8, have you checked the QoS of your topics as mentioned in #15? This package uses the default QoS profile for sensor data.
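Subscribing with that profile looks roughly like this (a minimal rclpy sketch, not this package's actual code; the node and topic names are placeholders):

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import Image


class ImageListener(Node):

    def __init__(self):
        super().__init__("image_listener")
        # qos_profile_sensor_data requests BEST_EFFORT reliability;
        # publisher and subscriber reliability must be compatible
        # for messages to flow
        self.sub = self.create_subscription(
            Image, "/camera/color/image_raw", self.on_image,
            qos_profile_sensor_data)

    def on_image(self, msg: Image) -> None:
        self.get_logger().info(f"got image {msg.width}x{msg.height}")


def main():
    rclpy.init()
    rclpy.spin(ImageListener())


if __name__ == "__main__":
    main()
```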
Hi @mgonzs13, thanks for the quick reply. I have checked the QoS of the topics (see attached picture). The image shows the topic info for the `image_raw` topic published by the RealSense node: it is RELIABLE. If I understand correctly, BEST_EFFORT is required instead. Is there a way to specify this on the CLI when launching the RealSense node, or is this a setting that must be applied in the launch file or in the ROS settings?
I have added some QoS parameters to the nodes and launch files to configure the reliability of the topics. So now you can change the QoS of the nodes of this package instead of the QoS of your camera. Take a look at the README. You can change the reliability by adding `image_reliability:=1`, `depth_image_reliability:=1` and `depth_info_reliability:=1` when you run the launch (0 = system default, 1 = reliable, 2 = best effort).
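Internally, the integer just selects a reliability policy, along these lines (an illustrative sketch of the mapping, not the exact source of this package):

```python
from rclpy.qos import QoSHistoryPolicy, QoSProfile, QoSReliabilityPolicy

# same convention as the launch arguments:
# 0 = system default, 1 = reliable, 2 = best effort
RELIABILITY = {
    0: QoSReliabilityPolicy.SYSTEM_DEFAULT,
    1: QoSReliabilityPolicy.RELIABLE,
    2: QoSReliabilityPolicy.BEST_EFFORT,
}


def qos_from_param(reliability: int) -> QoSProfile:
    # build a profile whose reliability comes from the node parameter
    return QoSProfile(
        reliability=RELIABILITY[reliability],
        history=QoSHistoryPolicy.KEEP_LAST,
        depth=1,
    )
```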
Thanks for the quick update. I have launched yolov8_ros with the new changes and the reliability adjustments on the CLI. `ros2 topic info --verbose` shows both yolov8_ros and the RealSense as RELIABLE. However, I still haven't gotten a stream. Just to check: upon launching yolov8_ros, should the stream with the detections show up as a pop-up window, or must rviz2 be launched separately? I have used rviz2 to visualise all the topics, and all streams show 'No Image' except `image_raw`.
Yes, you have to open rviz2 to visualize the `/yolo/dbg_image` topic. You can also echo the topics `/yolo/detections` and `/yolo/tracking`.
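If you want to consume the detections from code instead of echoing them, a minimal subscriber would look like this (assuming the `DetectionArray` message from this repo's `yolov8_msgs` and its `detections` field):

```python
import rclpy
from rclpy.node import Node
from yolov8_msgs.msg import DetectionArray  # message type from this repo


class DetectionEcho(Node):

    def __init__(self):
        super().__init__("detection_echo")
        self.sub = self.create_subscription(
            DetectionArray, "/yolo/detections", self.callback, 10)

    def callback(self, msg: DetectionArray) -> None:
        # print how many detections arrived in this message
        self.get_logger().info(f"{len(msg.detections)} detections")


def main():
    rclpy.init()
    rclpy.spin(DetectionEcho())


if __name__ == "__main__":
    main()
```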
I've run the ROS 2 node successfully and am able to get the output image by subscribing to the `/yolo/dbg_image` image topic in rviz2. My question is: how do I get the depth data like you did in rviz2 by simply running `ros2 launch yolov8_bringup yolov8.launch.py`? I've looked into the code and the Python file only seems to be subscribing to a `sensor_msgs/msg/Image` topic. I'm using a monocular camera with no depth information whatsoever; is it possible to derive depth information using only YOLOv8? If so, how can it be implemented?
@Unknown9190, in the examples of the README, I am using an RGB-D camera, which publishes the color and depth images and the point cloud. The `yolov8.launch.py` launch only runs the nodes to apply YOLOv8, the tracking and the debug image. For the 3D option, you have to use `yolov8_3d.launch.py`, which runs the 3D node subscribing to the depth image and camera info.
If you want depth images from monocular images, you can check solutions like the ones presented in Papers With Code.
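For example, MiDaS gives you a relative depth map from a single RGB image. A rough sketch using its official torch.hub entry points (note the output is relative inverse depth, not the metric depth the 3D node expects):

```python
import cv2
import torch

# load a small MiDaS model and its matching input transforms
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)

with torch.no_grad():
    prediction = midas(batch)
    # resize the prediction back to the input resolution
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

print(depth.shape)  # one relative-depth value per pixel
```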
@mgonzs13 Understood, I'll try to implement it into my ros2 project, thanks for the help!
Hey @Unknown9190, how is this going?
I had a similar issue, and in my case it turned out my GPU wasn't available, but the error message took a very long time to show up for whatever reason. Luckily, I left it running for more than 10 minutes one time and it finally did. No idea why it took so long to notify me; it starts working almost immediately when I switch the device to CPU.
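If anyone else hits this, checking the device up front saves a lot of waiting:

```python
import torch

# if this prints False, YOLOv8 can't use the GPU,
# so switch the device to CPU instead of waiting for the error
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```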
Hi,
I have a similar problem. I have `input_image_topic` remapped to `/camera/color/image_raw`. `ros2 topic info` shows 1 publisher and 3 subscribers. I have attempted to visualise with rviz2: `/yolo/dbg_image` streams fine, but with no detections, and switching to `/yolo/detections` shows a 'No Image' error.
I'm assuming that, upon launching in the terminal with `ros2 launch yolov8_bringup yolov8.launch.py model:=best4.pt input_image_topic:=/camera/color/image_raw`, the stream should start up by itself with the bounding-box detections, but I only see 'Model summary (fused): 168 layers, 1125971 parameters, 0 gradients, 28.4 GFLOPs'.
Device: Nvidia Jetson Orin Nano
Camera: RealSense D455
ROS distro: Foxy
Model: custom trained (best.pt)
input_image_topic: /camera/color/image_raw
How can I get the image?
Thanks