metanav opened this issue 1 year ago
Hi,
You can check out the current URDF representation here.
For connecting cameras to the robot you'll want to use `parent_frame` (which defaults to `oak-d-base-frame`). As for the configuration, currently these represent the RPY orientation of the base frame relative to the parent frame. See how the multicam example for `depthai_ros_driver` uses those parameters.
Also, are you using a mobile robot or a manipulator? In the case of a manipulator you might also want to check out hand-eye calibration.
Thanks, @Serafadam. I think I'm using the same URDF. By the way, I am using a manipulator with the camera mounted overhead so that the camera and the arm face each other, at roll = 0, pitch = 35 degrees, yaw = 180 degrees.
I was able to connect the manipulator's `base_link` to the OAK-D using the following command line:
$ ros2 launch depthai_examples yolov4_publisher.launch.py cam_pos_x:=0.90 cam_pos_y:=0 cam_pos_z:=0.70 \
cam_roll:=0.0 cam_pitch:=0.610865 cam_yaw:=3.14159 parent_frame:=base_link
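For reference, the radian values passed above are just the measured degree angles converted; a quick sketch of that conversion (nothing here is from the launch file itself, it only reproduces the numbers):

```python
import math

# Convert the measured mounting angles (degrees) into the radian
# values expected by the cam_roll / cam_pitch / cam_yaw arguments.
cam_pitch = math.radians(35)   # overhead camera tilt
cam_yaw = math.radians(180)    # camera rotated to face the arm
print(f"cam_pitch:={cam_pitch:.6f} cam_yaw:={cam_yaw:.6f}")
```

This prints `cam_pitch:=0.610865 cam_yaw:=3.141593`, matching the arguments used in the launch command.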
Now the problem is that the transform from `oak_rgb_camera_optical_frame` to `base_link` does not seem to work; the values are not correct. Also, I noticed in the output of the `/color/yolov4_Spatial_detections` topic that the `position.y` value is negative, but the `oak_rgb_camera_optical_frame` Y-axis points downward, so it should be positive when the detected object lies along the positive Y-axis, which is the case for the output below.
---
header:
  stamp:
    sec: 812
    nanosec: 529694938
  frame_id: oak_rgb_camera_optical_frame
detections:
- results:
  - class_id: '41'
    score: 0.0
  bbox:
    center:
      position:
        x: 276.0
        y: 227.0
      theta: 0.0
    size_x: 42.0
    size_y: 0.0
  position:
    x: 0.1272372156381607
    y: -0.035740792751312256
    z: 1.0023199319839478
  is_tracking: false
  tracking_id: ''
---
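For what it's worth, the detection above can be rotated into `base_link` by hand to sanity-check the TF output. This is only a sketch: it assumes the camera frame sits exactly at the pose given by the launch arguments (ignoring the small internal offsets in the OAK-D URDF) and uses the standard optical-frame convention (x right, y down, z forward) versus the ROS body-frame convention (x forward, y left, z up):

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (ZYX order, ROS convention)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# Detection in the optical frame (x right, y down, z forward):
x_o, y_o, z_o = 0.1272, -0.0357, 1.0023

# Optical frame -> camera body frame (x forward, y left, z up):
p_body = (z_o, -x_o, -y_o)

# Camera pose relative to base_link, from the launch arguments:
R = rpy_to_matrix(0.0, 0.610865, 3.14159)
t = (0.90, 0.0, 0.70)

# p_base = R * p_body + t
p_base = tuple(
    sum(R[i][j] * p_body[j] for j in range(3)) + t[i] for i in range(3)
)
print([round(v, 3) for v in p_base])
```

Comparing this hand-computed point against `tf2_echo base_link oak_rgb_camera_optical_frame` (applied to the detection) would show whether the published TF chain or the spatial calculation is at fault.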
Can you display the whole TF tree in Rviz?
Below is the whole TF tree in Rviz. I have removed the names since they were not readable.
@Serafadam Can you check what the pose of the detected object is with respect to the `base_link` frame for the above setup, which you can easily recreate in your environment? I think the TF conversion issue occurs only with a tilted camera. I have checked other projects, but they have the camera in a straight position.
There is some improvement after changing the tilt angle to 20 degrees, introducing some background textures, and elevating the objects a bit from the background (the floor in my case). In the `base_link` frame, the y position seems close, and the x position has a 15 to 20 mm error at different distances. The z position is not accurate, maybe due to the spatial calculation within the detected bounding box.
Could you do a small verification of the detected object's position in ROS vs the DepthAI Python demo? As for the camera rotation, maybe you can try setting the rotation arguments to zero and using a static transform publisher to tie the camera to the base link.
@Serafadam, yolov4_publisher.launch.py starts the robot state publisher, which also initiates the tf_static publisher. I used `depthai-python/examples/SpatialDetection/spatial_location_calculator.py` for comparison.
spatial_location_calculator:
x: -0.154m, y: -0.072m, z: 0.642m
depthai-ros:
x: -0.122m, y: -0.085m, z: 0.643m
I think the difference in x is due to the mono-right vs RGB-center reference frames; otherwise, the values are almost identical. If I set the rotation arguments to zero, the X axis in the camera frame and the Z axis in the base_link frame have correct transformation values, but the transformed Z axis in the base_link frame is wrong.
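The mono-right vs RGB-center explanation for the x offset can be roughly checked with the OAK-D geometry. This is only a plausibility sketch under assumed numbers: the OAK-D stereo baseline is 7.5 cm and the RGB sensor is assumed to sit midway between the two mono cameras, so about 3.75 cm from the right mono camera:

```python
# Assumed OAK-D geometry (not taken from this thread): 7.5 cm stereo
# baseline, RGB sensor centered between the two mono cameras.
baseline_m = 0.075
rgb_to_right_mono = baseline_m / 2          # about 0.0375 m

x_spatial_calc = -0.154   # depthai-python spatial_location_calculator
x_depthai_ros = -0.122    # depthai-ros detection output

observed_offset = x_spatial_calc - x_depthai_ros
print(abs(observed_offset), rgb_to_right_mono)
```

The observed 0.032 m gap is in the same ballpark as the assumed 0.0375 m sensor offset, which is consistent with (though not proof of) the reference-frame explanation.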
It might be that the spatial calculator is not returning correct y values in the `oak_rgb_camera_optical_frame`. Can you check?
I did some testing with roll = 0, pitch = 0, and yaw = 3.14159, so there is no tilt; the camera is only rotated 180 degrees to face the `base_link`, as shown in Rviz in the previous posts. I strongly feel that the `oak_rgb_camera_optical_frame` Y coordinate value does not correspond to the axis direction and seems shifted as well. I guess the spatial calculation is just being assigned without considering the axis. This could be the root cause of the wrong TF transform for Y, which corresponds to Z in the `base_link` frame. Also, for a few detections the transformed Y values seem fine, so it works within a short range but not for all variations in the detected positions. Please verify.
Maybe the tilt angle I am measuring is not correct. I tried to set the angle using the results from the IMU rotation-vector depthai-python example, but the quaternions seem inaccurate. Is there a way to calibrate the IMU?
Hello @metanav, do you mind if I ask how you were able to connect the transform tree of the Panda and the OAK-D Lite camera? I'm trying to transform the camera frame to the base frame for a pick-and-place operation.
Using the param `parent_frame:=base_link`.
Where do I do that? This is how my tree looks.
Maybe try with parent_frame:=panda_link7. You should pass the params to your launch file.
For example:
$ ros2 launch depthai_examples yolov4_publisher.launch.py cam_pos_x:=0.90 cam_pos_y:=0 cam_pos_z:=0.70 \
cam_roll:=0.0 cam_pitch:=0.610865 cam_yaw:=3.14159 parent_frame:=panda_link7
I can see something like this now, which means it is connected.
I have placed my camera on the end-effector; how can I calculate the camera parameters in relation to that?
I am trying to connect a robot's `base_link` (fixed) to the OAK-D's `oak-d_frame`. I can calculate the base_link distance from the camera in 3D manually. Which is the reference frame, `oak-d_frame` or `oak_rgb_camera_optical_frame`? Also, what is the following configuration for? Is it relative to `base_link`, or can I use the orientation from the IMU (after converting the quaternion to Euler angles)?
`depthai_examples/launch/yolov4_publisher.launch.py`: