introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

Demo unitree but for just one camera #1093

Open nicolasukd opened 6 months ago

nicolasukd commented 6 months ago

https://github.com/introlab/rtabmap_ros/blob/master/rtabmap_demos/launch/demo_unitree_quadruped_robot.launch

How would this launch file look for just one camera (the camera_face)? Also, I don't know if there is any difference between simulation and a real robot; I'm having trouble with the frame_id and I'm working with the real robot.

matlabbe commented 6 months ago

You would have to change rgbd_cameras to 1, and add <remap from="rgbd_image" to="/camera_face/rgbd_image"/> in the odometry, rtabmap and rtabmap_viz nodes. You may remove the *Decimation parameters.

For the frame_id, I don't know how close the simulation is to the real robot, but it should be the base frame of the robot, like base_link or base_footprint.

NicolasJorquera commented 6 months ago

OK, thank you. The issue is that I don't see any base frame like that (base_link or base_footprint); should it be a topic or a node? And in case the robot doesn't create this base frame, how can I create it?

matlabbe commented 6 months ago

You can do:

rosrun tf2_tools view_frames.py

to export the TF tree. If there are no frames, then there is an issue with the robot bringup. A robot without static transforms linking the sensors to the base frame is not a correct setup for ROS.
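
If a transform is really missing, one can be added with a static broadcaster. A sketch, with placeholder frame names and offsets (the real values must come from the robot's mechanical layout):

#!/usr/bin/env python
# Sketch: publish a missing static transform between a base frame and a
# sensor frame. Frame names and offsets are placeholders to adapt.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('base_frame_broadcaster')
br = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'    # hypothetical base frame
t.child_frame_id = 'camera_face'   # existing sensor frame
t.transform.translation.x = 0.25   # placeholder offsets (meters)
t.transform.translation.y = 0.0
t.transform.translation.z = 0.10
t.transform.rotation.w = 1.0       # identity rotation

br.sendTransform(t)
rospy.spin()

The same can also be done from the command line with tf2_ros's static_transform_publisher.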

nicolasukd commented 5 months ago

<remap from="rgbd_image" to="/camera_face/rgbd_image"/>

I think the remap you are doing there expects me to have a single rgbd_image topic, but I have color, depth and info topics. The problem is that the launch file works for three RGB-D cameras (3 cameras, 3 topics each), and I only need it for 1 camera with 3 topics. I want something like this: [screenshot]. But the launch file keeps throwing errors like this:

[screenshot]

All the topics that I receive from the robot:

[screenshot of topic list]

And /tf, /tf_static and /initial_pose don't receive anything. Should I create my own frame like these?

matlabbe commented 5 months ago

Well, it seems you correctly kept the rgbd_sync outputting /camera_face/rgbd_image, which can be connected to the other rtabmap nodes with the remap I posted previously.

From the log, the odometry is receiving the data, but it seems unable to track any features. Maybe there is something wrong with the images. Can you share a small rosbag containing those topics:

rosbag record /camera_face/depth /camera_face/color /camera_face/camera_info /tf /tf_static
nicolasukd commented 5 months ago

[screenshot] Rosbag

In the other issue #1031 you mentioned that the robot is not publishing frames for all robot links. Well, I might have found the script on the robot that's supposed to do that; here is the TF tree: frames.pdf

With the frames fixed, here are the errors when executing the launch file: [screenshot]

Also, if it helps, this is what I get on my computer with rviz: [screenshot]

matlabbe commented 5 months ago

The base frame of the robot would be base, based on the TF tree. The tree looks strange in rviz, as camera_face looks way off the robot:

[screenshot]

The image doesn't look rectified. Are the cameras calibrated?

The camera_info also doesn't match the actual image:

header: 
  seq: 5301
  stamp: 
    secs: 1636355820
    nsecs: 700815880
  frame_id: "camera_face"
height: 400
width: 928
distortion_model: "narrow_stereo"
D: [-0.0850522996852782, 0.18483663877205245, -0.0008254170617288337, 0.0028799372589857638, 0.0]
K: [2580.0868051856605, 0.0, 1347.4193338392204, 0.0, 2581.2289148329596, 1034.4539881461978, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [2572.388916015625, 0.0, 1352.2652257760637, 0.0, 0.0, 2575.5068359375, 1032.8075442280096, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False

The images are 232x200, and the depth image has encoding: "bgr8", while it should be a 16-bit unsigned or float32 encoding format.
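
For reference, a minimal sketch of publishing depth in an accepted format (16UC1, millimeters), assuming the SDK gives depth in meters as a float array; the topic and frame names are the ones used in this thread:

import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node('depth_publisher')
bridge = CvBridge()
pub = rospy.Publisher('/camera_face/depth', Image, queue_size=1)

def publish_depth(depth_m, stamp):
    # depth_m: float32 array of depths in meters (hypothetical SDK output).
    depth_mm = (depth_m * 1000.0).astype(np.uint16)  # 16UC1 = unsigned mm
    msg = bridge.cv2_to_imgmsg(depth_mm, encoding='16UC1')
    msg.header.stamp = stamp
    msg.header.frame_id = 'camera_face'
    pub.publish(msg)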

nicolasukd commented 5 months ago

To build the tf tree i use this urdf file URDF, i dont know if thats the one i have to use.

This is the sricpt i use to build the image topics and the camera_info topic: node_publisher.zip

This are the errors when i try with the rectified image and the depth image with "mono16", i think its something with the tf frame (i dont know whats wrong with that) image

Here is a new rosbag with the changes ive done: Rosbag

matlabbe commented 5 months ago

The missing TF is the one that rtabmap's odometry should be publishing, but right now the odometry cannot be computed because the input data is bad.

Try debugging your input images and camera_info using rviz's DepthCloud plugin. Here is what your data looks like: [screenshot]

The generated point cloud looks pretty bad. I looked at the camera_info topic, and K and P do not look correctly set:

header: 
  seq: 163
  stamp: 
    secs: 1636360011
    nsecs: 785186573
  frame_id: "camera_face"
height: 400
width: 464
distortion_model: "narrow_stereo"
D: [-0.0850522996852782, 0.18483663877205245, -0.0008254170617288337, 0.0028799372589857638, 0.0]
K: [2580.0868051856605, 0.0, 1347.4193338392204, 0.0, 2581.2289148329596, 1034.4539881461978, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [2572.388916015625, 0.0, 1352.2652257760637, 0.0, 0.0, 2575.5068359375, 1032.8075442280096, 0.0, 0.0, 0.0, 1.0, 0.0]
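
As a rough check: for a sensibly scaled camera_info, the principal point in K should land near the image center:

# Principal point vs. advertised resolution, from the camera_info above:
width, height = 464, 400
cx, cy = 1347.42, 1034.45     # K[2], K[5]
print(abs(cx - width / 2.0))  # ~1115 px off center, wider than the image itself
print(abs(cy - height / 2.0)) # ~834 px off center
# Offsets this large suggest K was computed at a much higher resolution and
# never rescaled when the images were downsized.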
nicolasukd commented 5 months ago

How did you know that the camera_info topic was wrong (specifically K and P)? I just copied the data from a file called unitree_camera.yaml on the face camera board. How can I set the camera_info topic correctly? Or should I calibrate the camera first (and build the camera_info from the calibration values)?

In the Unitree camera SDK there is an example called example_getCalibParamsFile.cc. This is the output when I run it on the face camera. I supposed this data is what I need to put in the camera_info topic, but in the output file it is split between right and left.

Here is what I see with the depth cloud plugin (this is what I'm seeing): [screenshot]

matlabbe commented 5 months ago

The output left/right calibration looks more like what the rectified calibration should be. Assuming the depth image is aligned with the left camera (is the color image the left camera?), you can publish a camera_info with the data of the left camera.
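
A minimal sketch of publishing such a camera_info, assuming the left camera is the depth-aligned one; the K values are the LeftIntrinsicMatrix from the SDK yaml quoted further down, and the distortion_model and P fields are assumptions to verify:

import rospy
from sensor_msgs.msg import CameraInfo, Image

rospy.init_node('camera_info_publisher')
pub = rospy.Publisher('/camera_face/camera_info', CameraInfo, queue_size=1)

info = CameraInfo()
info.header.frame_id = 'camera_face'
info.width, info.height = 232, 200   # RectifyFrameSize from the SDK yaml
info.distortion_model = 'plumb_bob'  # assumption; images must be rectified
info.D = [0.0, 0.0, 0.0, 0.0, 0.0]   # zero if the images are already rectified
info.K = [187.718, 0.0, 226.998,     # LeftIntrinsicMatrix (skew dropped)
          0.0, 192.856, 188.742,
          0.0, 0.0, 1.0]
info.R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
info.P = [187.718, 0.0, 226.998, 0.0,  # assumption: rectified K with zero
          0.0, 192.856, 188.742, 0.0,  # translation, monocular-style P
          0.0, 0.0, 1.0, 0.0]

def on_image(img):
    # Stamp the camera_info with the image stamp so synchronization can match them.
    info.header.stamp = img.header.stamp
    pub.publish(info)

rospy.Subscriber('/camera_face/color', Image, on_image)
rospy.spin()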

nicolasukd commented 5 months ago

Output

LeftIntrinsicMatrix would be the K parameter and LeftDistortionCoefficients would be the D parameter, but I don't know what LeftRotation could be.

Do I need to fill those 4 parameters (K, P, D, R)?

matlabbe commented 5 months ago

You can use OpenCV's stereoRectify() to recover the rectification and projection matrices: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga617b1685d4059c6040827800e72ad2b6
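
A rough sketch of that call, using the left/right values from the SDK yaml quoted further down; note the inter-camera rotation R is assumed to be identity here, since the meaning of the yaml's LeftRotation/RightRotation is unclear:

import numpy as np
import cv2

# Left/right intrinsics and distortion from the SDK's calibration yaml.
K_l = np.array([[187.71840, -0.20753, 226.99825],
                [0.0, 192.85587, 188.74182],
                [0.0, 0.0, 1.0]])
D_l = np.array([-0.21092921, 0.02811293, 0.00269090, 0.00192475])
K_r = np.array([[187.45085, 0.23583, 234.20635],
                [0.0, 192.56895, 195.15974],
                [0.0, 0.0, 1.0]])
D_r = np.array([-0.21189321, 0.02811696, 0.00130662, 0.00099040])

size = (232, 200)  # RectifyFrameSize (width, height)

# Assumption: R is the rotation between the two cameras, T the translation.
# The yaml's Translation looks like millimeters (~24.7 mm baseline); the units
# only affect the units of the baseline encoded in P2.
R = np.eye(3)
T = np.array([24.71270, -0.15218, 0.03744])

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K_l, D_l, K_r, D_r, size, R, T)

# For the left camera_info: R <- R1 (3x3) and P <- P1 (3x4), flattened row-major.
print(R1)
print(P1)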

nicolasukd commented 5 months ago

Rosbag

Video of how I set up the robot and the PC

I tried a bunch of configurations for the camera_info matrices, but nothing seems to work. On the other hand, I noticed that the camera topics (/camera_face/color, /camera_face/depth and /camera_face/camera_info) are stable (1.5 Hz for color and depth, and 3 Hz for camera_info) when the launch file is not running, but the rate drops when the launch file runs, to the point that the color and depth topics are sometimes skipped entirely (in the video I run the launch file around minute 5:30, and at minute 6:00 you can see it skip both the color and depth topics).

Another thing I'm doing now to generate a better TF tree: I'm running rosrun robot_state_publisher robot_state_publisher and rosrun joint_state_publisher joint_state_publisher (before, I was only running the first one). By doing this I'm able to get the camera_optical_face frame just like in the simulation.

The last thing I would like to point out is that in the video, around minute 3:30, I run the launch file on my PC, and after the rtabmap_viz window opens I run the script that creates the camera topics. By doing it in that order, it works relatively well for the first 5 frames that are sent. I point this out because when I create the camera topics first and then launch the rtabmap_ros file, rtabmap_viz just shows a red or black screen (minute 5:30).

I'm assuming this happens because by the time rtabmap_viz opens, the camera topics are already unsynchronized, but when I create the topics after rtabmap_viz is running, those first frames are synchronized.

If you think the same way, do you know a way I can ensure synchronized topics?

matlabbe commented 5 months ago

I would try first to debug why the frame rate is so low.

rosbag info 2024-01-14-14-22-31.bag 
path:        2024-01-14-14-22-31.bag
version:     2.0
duration:    12.6s
start:       Jan 14 2024 09:22:31.96 (1705252951.96)
end:         Jan 14 2024 09:22:44.55 (1705252964.55)
size:        72.5 MB
messages:    328
compression: none [35/35 chunks]
types:       sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image      [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/JointState [3066dcd76a6cfaef579bd0f34173e9fd]
             tf2_msgs/TFMessage     [94810edda583a504dfda3829e70d7eec]
topics:      /camera_face/camera_info    41 msgs    : sensor_msgs/CameraInfo
             /camera_face/color           7 msgs    : sensor_msgs/Image     
             /camera_face/depth          27 msgs    : sensor_msgs/Image     
             /joint_states              126 msgs    : sensor_msgs/JointState
             /tf                        126 msgs    : tf2_msgs/TFMessage    
             /tf_static                   1 msg     : tf2_msgs/TFMessage

When the number of depth images is not equal to the number of color images, rtabmap, or even rviz's depth cloud plugin, will have a hard time synchronizing the frames, and they will mostly drop all of them. Avoid remote topic streaming if possible while debugging this; work directly on the computer attached to the camera.
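
One way to check the synchronization outside of rtabmap is a small approximate-time subscriber (a similar policy to what rtabmap's sync uses); a sketch, with the topic names from this thread. If this callback rarely fires, rtabmap's input sync will drop frames too:

import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image

rospy.init_node('sync_check')

color_sub = message_filters.Subscriber('/camera_face/color', Image)
depth_sub = message_filters.Subscriber('/camera_face/depth', Image)
info_sub = message_filters.Subscriber('/camera_face/camera_info', CameraInfo)

def callback(color, depth, info):
    # Fires only when the three stamps fall within the slop window.
    rospy.loginfo('synced triplet at %.3f', color.header.stamp.to_sec())

sync = message_filters.ApproximateTimeSynchronizer(
    [color_sub, depth_sub, info_sub], queue_size=10, slop=0.1)
sync.registerCallback(callback)
rospy.spin()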

The depth image also looks wrong (either a camera_info issue or the depth values themselves are wrong; note that the grid shown has a 1-meter cell size!): [screen recording]

If rviz's depth cloud plugin cannot at least display your data smoothly, rtabmap won't work. [screenshot]

nicolasukd commented 5 months ago

Rosbag

I lowered the camera resolution and it fixed the topic rate problem (I tested with an Ethernet cable connected to the robot and it gives the same results).

For some reason rtabmap keeps failing.

Errors from rtabmap

Do you know another way I can do this? I just need the map, and it doesn't even have to be in color. I'm thinking I could do something with /odom and /depth, but first I need to get the /odom of the robot. Do you think I can create the map of a room just with /odom and /depth? Because I'm really having trouble with the /camera_info topic.

matlabbe commented 5 months ago

The input data looks wrong: [screen recording]

[screenshot]

rtabmap's odometry is giving errors/warnings that it cannot track features, which seems right as the input is crap.

nicolasukd commented 5 months ago

What do you mean by "the input is crap"? The only topic I would say is wrong is /camera_face/camera_info; or would you say that color and depth are also wrong?

I've noticed that in the simulation, although rtabmap works great, rviz's depth cloud plugin doesn't show up right. [screenshot]

I tried once to recover the matrix values with this function, but it didn't work. Do you recommend I keep trying with this function? It's supposed to give the matrices I need for the camera_info. Get Calibration params

nicolasukd commented 5 months ago

I've noticed that the /tf tree doesn't update. If I move the robot, the positions/rotations don't change; I would expect that at least the /trunk joint would change. I'm thinking that if this doesn't work, the visual odometry can't really work.

matlabbe commented 5 months ago

I guess the PERSPECTIVE matrix would be the one to set in the P matrix of the camera_info published along with your images. What is your code so far to acquire and publish the images?

What are the camera calibration parameters you get from that function?

nicolasukd commented 5 months ago

Rosbag record -a

I guess the PERSPECTIVE matrix would be the one to set in the P matrix of the camera_info published along with your images.

This is the code for the creation of the camera_info: /camera_info creation. It is based on this way to get those matrices.

What is your code so far to acquire and publish the images?

This is the code to acquire and publish the color and depth images (I also publish the camera_info in this picture): publish /color and /depth

This is the complete code if you want to take a look. nodeCamera.zip

Something weird happened with rtabmap (it's like the odometry went crazy): [screenshot]

What are the camera calibration parameters you get from that function?

There is another function on that page that, instead of returning the matrices, saves them in a yaml file. For this camera it returns this file.

matlabbe commented 4 months ago

That is the yaml file:

%YAML:1.0
---
SerialNumber: !!opencv-matrix
   rows: 1
   cols: 1
   dt: d
   data: [ 0. ]
PositonNumber: !!opencv-matrix
   rows: 1
   cols: 1
   dt: d
   data: [ 1. ]
RectifyFrameSize: !!opencv-matrix
   rows: 1
   cols: 2
   dt: d
   data: [ 232., 200. ]
LeftXi: !!opencv-matrix
   rows: 1
   cols: 1
   dt: d
   data: [ 5.3073391800727254e-01 ]
LeftIntrinsicMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.8771840095535626e+02, -2.0753244667352710e-01,
       2.2699825096779537e+02, 0., 1.9285586610634437e+02,
       1.8874181905372117e+02, 0., 0., 1. ]
LeftDistortionCoefficients: !!opencv-matrix
   rows: 1
   cols: 4
   dt: d
   data: [ -2.1092920831259651e-01, 2.8112929058274939e-02,
       2.6909017264216792e-03, 1.9247502294209545e-03 ]
LeftRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ -9.9997554730994254e-01, 6.9883110500068820e-03,
       2.6132517915822132e-04, -6.9883112886257428e-03,
       -9.9997558145453402e-01, 0., 2.6131879797745264e-04,
       -1.8262216995135428e-06, 9.9999996585457485e-01 ]
RightXi: !!opencv-matrix
   rows: 1
   cols: 1
   dt: d
   data: [ 5.3392936826508908e-01 ]
RightIntrinsicMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.8745084898519508e+02, 2.3583396754926955e-01,
       2.3420634881802744e+02, 0., 1.9256894620907070e+02,
       1.9515974049790600e+02, 0., 0., 1. ]
RightDistortionCoefficients: !!opencv-matrix
   rows: 1
   cols: 4
   dt: d
   data: [ -2.1189321333897418e-01, 2.8116958620687974e-02,
       1.3066204675950433e-03, 9.9040048477168504e-04 ]
RightRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ -9.9997894738474136e-01, 6.1579818240574515e-03,
       2.0454943559003844e-03, -6.1556637831335836e-03,
       -9.9998040662965748e-01, 1.1376116096352374e-03,
       2.0524596693869052e-03, 1.1249962844104855e-03,
       9.9999726089258190e-01 ]
Translation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [ 2.4712695375753881e+01, -1.5217627490965985e-01,
       3.7436779739118728e-02 ]
Reserved: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]

It looks like the input to OpenCV's stereoRectify() function, to recover the rectification matrix R and projection matrix P that could be set in the corresponding camera_info fields. Not sure what LeftRotation and Xi mean, though.

However, the camera_info you are using would "work", with at worst a scale issue. I think the problem is coming from the depth format. I am not sure I understand, from the API description of getDepthFrame(), what this means:

@param[in] color true: depth is color image, false:depth is gray image

I would expect depth to be either in meters or in mm, not a color or grayscale image. That could explain why the point cloud is so poor: the depth values don't represent a distance.
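
A quick probe to confirm what the depth topic actually contains (a sketch); metric 16UC1 depth should span plausible distances in millimeters, not 0-255:

import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def cb(msg):
    img = np.asarray(bridge.imgmsg_to_cv2(msg))  # passthrough encoding
    rospy.loginfo('encoding=%s dtype=%s min=%s max=%s',
                  msg.encoding, img.dtype, img.min(), img.max())

rospy.init_node('depth_probe')
rospy.Subscriber('/camera_face/depth', Image, cb)
rospy.spin()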

[screen recording] [screenshot]

Maybe ask on their github what is the actual format of the depth image.

k0012 commented 2 weeks ago

Hello @nicolasukd, did it work in the end? I am now working on a Unitree Go1 robot dog and want to apply VSLAM just like you.