isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

Open3D for SLAM application #220

Closed HTLife closed 6 years ago

HTLife commented 6 years ago

I'm evaluating whether to build my SLAM research on top of Open3D. The Python examples demonstrated on the official site seem to work in an offline fashion. One major consideration for a SLAM application is real-time capability.

qianyizh commented 6 years ago
  1. We haven't implemented a GPU backend because we haven't felt a need to do so. The current CPU backend is highly optimized and seems fast enough for the applications we have tested. If a need for a GPU backend arises, we will consider adding one.

  2. There is no noticeable performance difference between C++ and the Python binding. The Python binding is only a wrapper; the backend is implemented entirely in C++.

HTLife commented 6 years ago

@qianyizh Thank you for the reply. If the performance of the Open3D Python binding is not a barrier, it might be suitable for some reconstruction work.

My concern is that if I use the Python binding, the other components of my code could be slow due to Python's interpreted nature. If the goal is to reach real-time processing at 30 fps, the effect could be significant. It is also easier to combine CUDA code with C/C++.

For these reasons, I would like to start learning how to use the C++ version of Open3D. Thanks for open-sourcing this great work!

mayankamedhe commented 6 years ago

@HTLife I am curious about how you intend to apply Open3D to SLAM. I want to build a precise volumetric 3D map from SLAM output, but I am not finding any good resources online. Can you point me to some good references? Thanks in advance!

HTLife commented 6 years ago

@mayankamedhe What format does your SLAM system output? (Or what information does that map contain?)

mayankamedhe commented 6 years ago

@HTLife I am using ORB-SLAM2. It outputs the camera trajectory and the keyframe trajectory. Each trajectory entry contains a timestamp, the x, y, z position of the color camera's optical center with respect to the world origin (as defined by the motion capture system), and the orientation of that optical center as a unit quaternion (qx qy qz qw) in the same world frame.
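For reference, a trajectory entry in that layout (the TUM RGB-D format: `timestamp tx ty tz qx qy qz qw`) can be turned into a 4x4 camera-to-world matrix with plain NumPy. This is a minimal sketch (the function names are my own, not part of Open3D or ORB-SLAM2):

```python
import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Convert a unit quaternion (x, y, z, w) into a 3x3 rotation matrix."""
    n = np.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n  # normalize defensively
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def tum_line_to_pose(line):
    """Parse 'timestamp tx ty tz qx qy qz qw' into a 4x4 camera-to-world matrix."""
    _, tx, ty, tz, qx, qy, qz, qw = (float(v) for v in line.split())
    T = np.eye(4)
    T[:3, :3] = quat_to_rot(qx, qy, qz, qw)
    T[:3, 3] = [tx, ty, tz]
    return T

# Identity orientation, camera center at (1, 2, 3)
pose = tum_line_to_pose("1305031102.175 1.0 2.0 3.0 0.0 0.0 0.0 1.0")
```

Such 4x4 matrices are what Open3D's geometry transforms expect.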

HTLife commented 6 years ago

@mayankamedhe Your data is sufficient to make a simple "point cloud map". However, since you want a volumetric map, you will have to handle the frame fusion algorithm and the volumetric data structure yourself. It is easier to do so by referencing well-known open-source projects, e.g. KinectFusion or Kintinuous.
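The simple point cloud map mentioned above amounts to transforming each frame's points by its camera pose and merging the results. A pure-NumPy sketch (hypothetical helper, not an Open3D API; a real pipeline would use Open3D point cloud objects and voxel downsampling):

```python
import numpy as np

def build_point_cloud_map(frames, poses):
    """Merge per-frame point clouds into one world-frame point cloud map.

    frames: list of (N_i, 3) arrays of points in each camera's local frame
    poses:  list of 4x4 camera-to-world matrices (e.g. from an
            ORB-SLAM2 keyframe trajectory)
    """
    world_points = []
    for pts, T in zip(frames, poses):
        R, t = T[:3, :3], T[:3, 3]
        world_points.append(pts @ R.T + t)  # apply the rigid transform
    return np.vstack(world_points)
```

A volumetric map goes one step further: instead of concatenating points, each frame is fused into a voxel grid (e.g. a TSDF), which is what KinectFusion-style systems implement.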

I also found that the tum-vision team provides a tool called fastfusion. It seems to match your requirements.

mayankamedhe commented 6 years ago

@HTLife Thanks a lot!