raulmur / ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
9.3k stars 4.7k forks

Localization is inaccurate! #676

Open DodsGTH94 opened 5 years ago

DodsGTH94 commented 5 years ago

Hi @raulmur, I'm currently working with your ORB_SLAM2, using an RGB-D camera, namely the Intel RealSense D435. Following other threads and the paper, I set the camera parameters in the .yaml file using the values given by Intel's calibration tool (as published on the "/camera/color/camera_info" topic), and I left all other parameters at their defaults.

```yaml
%YAML:1.0

Camera.fx: 617.6018676757812
Camera.fy: 617.703125
Camera.cx: 328.37994384765625
Camera.cy: 246.8520965576172

Camera.k1: 0.000000
Camera.k2: 0.000000
Camera.p1: 0.000000
Camera.p2: 0.000000
Camera.k3: 0.000000

Camera.width: 640
Camera.height: 480
Camera.fps: 30.0
Camera.bf: 30.880934
Camera.RGB: 1
ThDepth: 55.0
DepthMapFactor: 1000.0

ORBextractor.nFeatures: 1000
ORBextractor.scaleFactor: 1.2
ORBextractor.nLevels: 8
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

Viewer.KeyFrameSize: 0.025
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
```

Then I slightly modified the "ros_rgbd.cc" file to make it publish the camera position in real time. The lines I added were simply copied from the "SaveTrajectoryTUM" function, to print only the translation vector; for now, I don't care about orientation.

```cpp
// Print the translation vector
cv::Mat Tcw = SLAM.TrackRGBD(imRGB, imD, tframe);
if (Tcw.empty())
    cout << "Tcw is empty!" << endl << endl;
else
{
    cv::Mat Rwc = Tcw.rowRange(0,3).colRange(0,3).t(); // extract the rotation matrix from the full pose
    cv::Mat twc = -Rwc*Tcw.rowRange(0,3).col(3);       // extract the translation vector, with the correct orientation
    //cout << "Tcw = " << endl << " " << Tcw << endl << endl;
    //cout << "Rwc = " << endl << " " << Rwc << endl << endl;
    cout << "twc = " << endl << " " << twc << endl << endl;
}
```

Now, when I test it and move the camera around, I build a map of a room and close a loop. The maximum distance, which should be approximately 6.4 m, comes out as 4.8 m, so I have a big error.

Can anyone help me out with this? Thank you!!

lapake commented 5 years ago

Hello,

I am working with the same setup and I ran into the same problem. Since localization mode does not show any data, I also modified the code (inspired by yours), but to write the information to a file. Note that I only noticed this inaccuracy because I am running the modified version with the point-cloud viewer (https://github.com/gaoxiang12/ORBSLAM2_with_pointcloud_map) and the result was a complete disaster :)
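The file logging mentioned above can be sketched like this (a minimal sketch built on the same Tcw pose returned by SLAM.TrackRGBD; the function and file names are made up, and it assumes the ORB_SLAM2/OpenCV build environment):

```cpp
// Sketch: append the camera position to a log file each frame.
// Assumes Tcw is the 4x4 world-to-camera pose (cv::Mat, CV_32F)
// returned by SLAM.TrackRGBD, as in the snippet earlier in this thread.
#include <fstream>
#include <opencv2/core/core.hpp>

void LogPosition(const cv::Mat &Tcw, double tframe) {
    if (Tcw.empty()) return;                            // tracking lost this frame
    cv::Mat Rwc = Tcw.rowRange(0,3).colRange(0,3).t();  // camera-to-world rotation
    cv::Mat twc = -Rwc * Tcw.rowRange(0,3).col(3);      // camera centre in world frame
    std::ofstream f("trajectory.txt", std::ios::app);   // hypothetical file name
    f << tframe << " "
      << twc.at<float>(0) << " "
      << twc.at<float>(1) << " "
      << twc.at<float>(2) << "\n";
}
```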

If you are interested in the test results (that's why I needed the data in a file :) ) I ran to verify that the accuracy of this setup is good enough (an estimation error of around 3 cm over a movement of 1.5 m), let me know and I will post them here for anyone interested. Now to the point; here are some considerations that I could hardly find documented anywhere and that might help (aimed at both you and the public in general):

1 - The topics are "/camera/color/image_raw" and "/camera/aligned_depth_to_color/image_raw". Set these in ros_rgbd.cc.

2 - Make sure of the camera's intrinsic parameters. Running "rostopic echo /camera/color/camera_info" will show them.

3 - Make sure the YAML file you pass when running ORB_SLAM2 matches the parameters obtained in point 2.

4 - To run the camera node, I do "roslaunch realsense2_camera rs_rgbd.launch". Make sure its parameters match the resolution the camera is actually using; modify rs_rgbd.launch if needed. The point is to be consistent on the launch side, the camera side and the ORB_SLAM2 side.

5 - You can run the Intel RealSense Viewer tool to generate a JSON file: for example, switch the configuration to the hand-held preset and save it as a JSON file. You can then load this configuration file from rs_rgbd.launch for optimal results.

6 - Keep digging into the rs_rgbd.launch file; it even exposes processing modules that you can enable.
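Regarding point 5, loading the Viewer-exported JSON usually boils down to a single launch argument; as a sketch (the exact argument name may differ between realsense2_camera versions, so check your copy of the launch file):

```shell
# Load a JSON preset exported from the RealSense Viewer.
# "json_file_path" is the arg name in recent realsense2_camera launch files;
# verify it exists in your rs_rgbd.launch before relying on it.
roslaunch realsense2_camera rs_rgbd.launch json_file_path:=/path/to/preset.json
```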

I hope I did not beat around the bush too much. I also found a lack of information when trying to get good tracking and mapping with this camera.

Good luck.

wwtu commented 5 years ago


I also ran into this problem, and I found that I have to calibrate the "DepthMapFactor" parameter, otherwise the localization values are scaled up or down. I don't quite get your points 5 and 6, though; could you please explain them in more detail? It would be nice if you could share your modified rs_rgbd.launch file.