bmegli / ev3dev-mapping-ui

A cross-platform real-time 3D spatial data visualization
GNU General Public License v3.0

Volumetric 3D Mapping in Real-Time / LSD-SLAM #1

Closed ghost closed 8 years ago

ghost commented 8 years ago

Hi Bartosz,

Hope you are doing all well !

I had a quick question about ev3dev-mapping-ui.

Is it possible to achieve some fast Volumetric 3D Mapping in Real-Time like TUM's Fastfusion or LSD-SLAM ?

Links: https://github.com/tum-vision/fastfusion https://www.youtube.com/watch?v=X0hx2vxxTMg

That would be awesome !

Cheers, Luc

bmegli commented 8 years ago

Hi @lucmichalski !

First, Volumetric 3D Mapping in Real-Time on a CPU.

The short answer is no. The long answer is maybe, but...

ev3dev-mapping-ui with ev3dev-mapping-modules uses low-cost hardware, which has some shortcomings. Taking a dense high-quality scan takes a lot of time (the screen recordings on YouTube are played at 16x speed, one at 32x).

So even having fast 3D mesh reconstruction like in "Volumetric 3D..." will not make the process fast.

The second problem is a different type of data. In "Volumetric 3D..." it's RGB-D image data; in ev3dev-mapping it's a monochrome incremental scan (not even in a single plane when the robot is moving).

It doesn't mean that interesting results can't be obtained. Both the voxel representation from "Volumetric 3D..." and real-time meshing with marching cubes could probably be adapted, yielding a mesh instead of a point cloud, which would be great.

Finally, ev3dev-mapping-ui is not a SLAM (at this point). This means that the robot position estimate error grows unbounded in time, which will affect the quality of longer scans.
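A toy illustration of why this matters (my own sketch, not project code): with dead reckoning alone, even a small uncorrected heading bias makes position error grow with distance travelled, so longer scans drift more.

```python
# Hypothetical sketch: unbounded pose drift under dead reckoning.
# A small constant heading bias per step, never corrected, bends the
# estimated path away from the true straight-line trajectory.
import math

def dead_reckon(steps, step_len=0.1, heading_bias=0.01):
    """Integrate odometry with a small per-step heading error (radians)."""
    x = y = theta = 0.0
    for _ in range(steps):
        theta += heading_bias          # uncorrected gyro/encoder bias
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
    return x, y

def drift_error(steps):
    """Distance between the drifting estimate and the true straight path."""
    x, y = dead_reckon(steps)
    return math.hypot(x - 0.1 * steps, y)
```

A SLAM layer bounds this error by re-referencing against previously seen parts of the map instead of trusting odometry alone.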

There's more to tell: laser scans can be merged with camera data, SLAM can be added for the XZ-plane laser, and many other things can be done, but it's either original research or a lot of work.

Kind regards!

P.S. LSD-SLAM later; both are great, btw.

bmegli commented 8 years ago

Hi again @lucmichalski ,

As for LSD-SLAM (Semi-Dense Visual Odometry for a Monocular Camera), the nature of the data is again completely different.

It's image data in LSD-SLAM. In ev3dev-mapping, as the robot moves, the XY-plane (vertical) laser scans new planes. There are no common points between the scans that could be used for reference (as in LSD-SLAM).

For the XZ-plane (horizontal) laser there are common points, but there are simpler and more effective methods exploiting the nature and error model of laser beams, for example "Weighted Range Sensor Matching Algorithms for Mobile Robot Displacement Estimation".
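To illustrate the weighting idea only (a simplified sketch of my own, not the cited algorithm, which also estimates rotation and uses a full beam error model): displacement between two scans can be estimated by a weighted least-squares fit over matched points, down-weighting long-range readings whose noise is larger.

```python
# Hypothetical sketch: weighted displacement estimate between two 2D scans
# with known point correspondences. Weights are inverse range, a crude
# stand-in for the laser's range-dependent error model.
import numpy as np

def weighted_translation(prev, curr, ranges):
    """Translation t minimizing sum_i w_i * ||curr_i + t - prev_i||^2."""
    w = 1.0 / np.asarray(ranges, dtype=float)   # simple inverse-range weights
    w /= w.sum()
    # closed-form solution: weighted mean of the point differences
    return ((prev - curr) * w[:, None]).sum(axis=0)

prev = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
curr = prev - np.array([0.2, 0.1])              # robot moved by (0.2, 0.1)
t = weighted_translation(prev, curr, ranges=[1.0, 2.0, 4.2])
# t recovers (0.2, 0.1)
```

In practice the correspondences themselves must be found (e.g. by nearest neighbor, iterated ICP-style), which is where most of the real work lies.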

It all doesn't mean that those things can't be combined in an interesting way (e.g. taking a laser scan of the environment and then using this data as prior knowledge for something like LSD-SLAM, or adding a camera to the robot), but it all lies in the area of original research (read: months of work).

Kind regards!

bmegli commented 8 years ago

So summing up:

Both methods use a different kind of data and "mode" of work.

ev3dev-mapping builds up the map incrementally from data in a slow, precise scan. Volumetric 3D mapping refines map precision over time. LSD-SLAM references previous images for visual odometry, which is impossible for ev3dev-mapping.

One thing that seems directly applicable is the voxel representation and real-time 3D meshing, which would integrate directly with Unity's mesh model, collision detection, and navigation features.

To really benefit, it would be necessary to add a SLAM layer for the XZ-plane laser so that pose errors don't grow unbounded, and the XY-plane laser could then use that position information when integrating readings.
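The last step is straightforward once corrected poses exist. A minimal sketch of my own (not project code), assuming the SLAM layer supplies a (x, y, heading) pose per reading:

```python
# Hypothetical sketch: placing an XY-plane laser reading into the world
# frame using a SLAM-corrected robot pose instead of raw odometry.
import math

def integrate_reading(pose, laser_range, laser_angle):
    """Transform a polar laser reading, taken at pose = (x, y, heading),
    into a world-frame 2D point (the vertical coordinate is omitted)."""
    x, y, heading = pose
    a = heading + laser_angle
    return (x + laser_range * math.cos(a),
            y + laser_range * math.sin(a))

# A 2 m reading straight ahead from a robot at (1, 1) heading +90 degrees
point = integrate_reading((1.0, 1.0, math.pi / 2), 2.0, 0.0)
# point is approximately (1.0, 3.0)
```

With raw odometry the same formula holds, but the pose it consumes drifts; feeding it SLAM-corrected poses is what keeps long scans consistent.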