Closed KXRLYM closed 7 months ago
Hi @KXRLYM, I am trying to reproduce the error but I can't. Which version of message_filters are you using? Btw, you can try the non-3D version.
It should be Humble? I'm not sure which version.. how would I check that?
As it is a Python package, you can check the version with `pip3 show message_filters`.
Here's the output for that:

```
Name: message-filters
Version: 4.3.3
Summary: UNKNOWN
Home-page: UNKNOWN
Author:
Author-email:
License: UNKNOWN
Location: /opt/ros/humble/local/lib/python3.10/dist-packages
Requires:
Required-by:
```
Well, that's the same version I have. Are you using the latest commit of this repo?
Okay, I think pulling the latest one fixed it, but I am still having issues getting the image to show in RViz2, similar to #15. I've also made the QoS for the depth image, camera info and rgb image topics best effort. It works in 2D, but I want to get a 3D pose, so I am trying 3D now with a Kinect depth camera. I can visualise both depth and rgb in RViz, but not /yolo/dbg_image.
Any help would be appreciated!! Also, I'm now wondering: could it be a problem that the depth and rgb images have different sizes and camera settings?
+++ Edit: still the same issue with the depth image aligned to the rgb image...
Have you set the QoS for the camera info topic? Besides that, the 3D node uses a frame to transform the 3D data from the depth image frame to the base frame. By default this frame is `base_link`. Check if you have this frame in your tf tree.
Yeah, I have set QoS for camera info as `self.info_pub = self.create_publisher(CameraInfo, '/depth/camera_info_formatted', qos_profile_sensor_data)`. I'm not too familiar with the transform, but the following is my tf tree if that looks alright to you..
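For anyone hitting the same wall: a common reason images silently never arrive is a reliability mismatch, because a best-effort publisher will not match a reliable subscriber, while every other combination connects. A minimal pure-Python sketch of that matching rule (an illustration of the DDS/ROS 2 QoS policy, not an actual ROS API):

```python
# Reliability matching rule for DDS/ROS 2 QoS: the only incompatible
# combination is a BEST_EFFORT publisher paired with a RELIABLE subscriber.
RELIABLE = "reliable"
BEST_EFFORT = "best_effort"

def reliability_compatible(pub: str, sub: str) -> bool:
    """Return True if a publisher/subscriber pair can connect."""
    return not (pub == BEST_EFFORT and sub == RELIABLE)
```

This is why setting both sides of the camera topics consistently to best effort, e.g. via `qos_profile_sensor_data`, makes them connect.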
So, in your case, camera_base seems to be the base frame of the tf tree. You have to add `target_frame:=camera_base` when you run the launch file.
Thanks for your response; unfortunately, still no image is coming through :(( I've also tried the reliability parameter option as well. Nice update, though.
Check if you are getting messages on the topics /yolo/detections, /yolo/tracking and /yolo/detections_3d. Have you tried using the non-formatted topics?
Hi, yes, some coordinates are being echoed on the topics /yolo/detections and /yolo/tracking, but nothing on /yolo/detections_3d. I've tried using the non-formatted topics with your new reliability parameter setting, but no luck with the images.
Can you share a rosbag of your camera and tf topics?
Will get onto it soon. On a different note though, I noticed that none of the detection and tracking messages have a timestamp. I was trying to subscribe to the detection messages from my custom node using message_filters.ApproximateTimeSynchronizer, but it didn't work (i.e. it never entered the callback function, even though I was getting messages) because the detection messages had no timestamp. When I manually added a timestamp to your source code with self.get_clock().now().to_msg(), the message filter worked, but all of a sudden I was no longer receiving images from /yolo/dbg_image.. Maybe it has some correlation to this problem?
++ I also added a timestamp to all of your ROS 2 messages, but still couldn't recover /yolo/dbg_image. I have removed the timestamps and now it's showing the images again, but I'd love to also use a message_filter on detections too..
The header of the messages, that is, the timestamp and the frame_id, is the same as the header of the camera messages. So also check whether your camera messages have a timestamp.
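The header propagation described above can be sketched like this, with SimpleNamespace objects standing in for ROS messages (in a real node the assignment is the same attribute copy, just on actual message types):

```python
from types import SimpleNamespace

def propagate_header(camera_msg, out_msg):
    """Copy stamp and frame_id from the incoming camera message to the
    outgoing detection message, so downstream time synchronizers can
    match the two streams."""
    out_msg.header = camera_msg.header
    return out_msg

# Stand-ins for an Image message and a detection message:
image = SimpleNamespace(header=SimpleNamespace(
    stamp=SimpleNamespace(sec=12, nanosec=500), frame_id="camera_base"))
detections = SimpleNamespace(header=None)
propagate_header(image, detections)
```

If the camera message itself has an empty header, the detections inherit that empty header, which is exactly the failure mode discussed here.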
True; /rgb/image_formatted was missing timestamps, so I have fixed that. I've made sure that both the depth image and camera info have the correct header as well. The outcome is really strange, though: at one point I was able to receive images from /yolo/dbg_image after launching the 3D node, but all of a sudden I no longer can. I feel like it's a data syncing issue. When I did manage to echo it, the image was missing a header, as below:
```
$ ros2 topic echo /yolo/dbg_image
header:
  stamp:
    sec: 0
    nanosec: 0
  frame_id: ''
height: 1536
width: 2048
encoding: bgr8
is_bigendian: 0
step: 6144
data:
- 54
- 68
- 64
- 70
- 84
- 80
- 74
- 85
```
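An all-zero stamp with an empty frame_id, like in the echo above, almost always means the publisher never filled the header in rather than the message being sent at time zero. A small hypothetical helper (just for sanity-checking echoed values, not part of the package) could be:

```python
def header_is_filled(sec: int, nanosec: int, frame_id: str) -> bool:
    """Return True if a header looks populated; a 0/0 stamp with an
    empty frame_id is the default-constructed (unfilled) header."""
    return (sec != 0 or nanosec != 0) and frame_id != ""
```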
The reason why I am using /rgb/image_formatted is that when I launched yolo with the direct output from the camera, it threw an error saying something like '3 channels expected but 4 channels given'. I figured I had to change the encoding of the camera's image output from 'bgra' to 'bgr', and then I was able to run the code.
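The 'bgra' to 'bgr' conversion mentioned above amounts to dropping the alpha channel. Assuming the image arrives as a NumPy array (as cv_bridge returns), a minimal sketch:

```python
import numpy as np

def bgra_to_bgr(img: np.ndarray) -> np.ndarray:
    """Drop the alpha channel of a BGRA image, (H, W, 4) -> (H, W, 3).
    Equivalent to cv2.cvtColor(img, cv2.COLOR_BGRA2BGR) for BGRA input."""
    if img.ndim != 3 or img.shape[2] != 4:
        raise ValueError("expected an (H, W, 4) BGRA image")
    # Slicing keeps the B, G, R planes; ascontiguousarray makes the
    # result safe to hand to cv_bridge or OpenCV.
    return np.ascontiguousarray(img[:, :, :3])
```

Alternatively, republishing the camera stream with the converted encoding (as done here with /rgb/image_formatted) works as long as the republished header is preserved.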
I have tried relaxing the 'slop' parameter in ApproximateTimeSynchronizer for syncing depth, camera info and detections, and now /yolo/dbg_image is coming through, but with no detection markers. There are some coordinates printing on /yolo/detections and /yolo/tracking, but nothing on /yolo/detections_3d or /tf.
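For intuition about the slop tuning above: an approximate-time synchronizer only fires when one message from each topic lands within the slop window, so detections stamped 0 can never match live camera stamps, no matter how reasonable the slop. A simplified pure-Python illustration of that pairing idea (not the actual message_filters algorithm, which is more sophisticated):

```python
def approx_pairs(stamps_a, stamps_b, slop):
    """Greedily pair two sorted lists of timestamps (in seconds) whose
    difference is at most `slop`. Unmatched stamps are dropped."""
    pairs, j = [], 0
    for ta in stamps_a:
        # Skip b-stamps that are already too old to match ta.
        while j < len(stamps_b) and stamps_b[j] < ta - slop:
            j += 1
        if j < len(stamps_b) and abs(stamps_b[j] - ta) <= slop:
            pairs.append((ta, stamps_b[j]))
            j += 1
    return pairs

# Example streams: camera frames at ~10 Hz, one detection with a real
# stamp and one with the unfilled zero stamp discussed above.
camera = [100.00, 100.10, 100.20]
detections = [0.0, 100.12]
```

With slop=0.05, only the 100.10/100.12 pair matches; the zero-stamped detection is silently dropped, which is exactly the "never enters the callback" symptom described earlier.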
Hey @KXRLYM, how is this going?
Hi, I'm using an Azure Kinect RGB-D camera to do 3D human pose detection, but I am getting the following error this time.
Thank you