ethz-asl / segmap

A map representation based on 3D segments
BSD 3-Clause "New" or "Revised" License

Using segmatch with different backend #48

Open koko19 opened 7 years ago

koko19 commented 7 years ago

Hi, we have developed our own SLAM back-end and would like to use segmatch just to retrieve loop closures and the SE3 pose between them. I have been going through the code and see that segmatch is coupled with your own SLAM system and uses the current pose when segmenting each point cloud. Is it possible to run segmatch in a standalone fashion, where it simply processes each point cloud in its local frame and finds loop closures between them?

vkee commented 7 years ago

SegMatch is pretty intertwined with its dependencies so I think it would be quite a bit of work to do what you want (but I think it is possible).

@rdube?

rdube commented 7 years ago

Hi @koko19, what you are asking is definitely possible! Given your two clouds, you can select one to be the target and the other the source. Then you can simply use the functions SegMatch::processAndSetAsTargetCloud() and SegMatch::processAndSetAsSourceCloud(). See here: https://github.com/ethz-asl/segmatch/blob/master/segmatch/src/segmatch.cpp#L94

Then SegMatch::findMatches() and SegMatch::filterMatches() can be used to retrieve the loop closures. You can follow this example in the worker: https://github.com/ethz-asl/segmatch/blob/master/segmatch_ros/src/segmatch_worker.cpp#L114

For now you will have to compile the laser_slam and GTSAM dependencies, which should not cause any issues. It could be interesting to make segmatch fully standalone in the future, and we will definitely consider that option for the newer version we are developing. I hope this answers your question!

koko19 commented 7 years ago

Hi @rdube, thank you very much for the quick reply. I am not sure I understood you correctly. We have multiple point clouds: a new point cloud is taken from the sensor whenever the robot pose has changed significantly since the last cloud. I would then need to process every point cloud and find loop closures with your algorithm.

I am not sure from your answer how I would add multiple point clouds to the algorithm and find loop closures between them. Is this the correct way?

take pcd_1:   processAndSetAsSourceCloud(pcd_1, ...)
take pcd_2:   processAndSetAsSourceCloud(pcd_2, ...)
take pcd_3:   processAndSetAsSourceCloud(pcd_3, ...)
...
take pcd_n:   processAndSetAsTargetCloud(pcd_n, ...)
SegMatch::findMatches()
SegMatch::filterMatches()

take pcd_n+1: processAndSetAsSourceCloud(pcd_n+1, ...)
take pcd_n+2: processAndSetAsSourceCloud(pcd_n+2, ...)
...

rdube commented 7 years ago

Hi @koko19, are you working on an offline batch SLAM solution, or are you looking for something that can work in real-time? In the latter case I would suggest always using the latest point cloud as the target, as you would then only have to build one k-d tree for the matching.

If I understand your application correctly a solution could look like the following, assuming that n is the index of the latest point cloud.

processAndSetAsTargetCloud(pcd_n ...)
for i = 1 to n-1
  processAndSetAsSourceCloud(pcd_i ...)
  findMatches()
  filterMatches()
end

Note that if you are looking for real-time performance, this would not be the fastest solution, as several clouds would have to be processed (segmented and described) multiple times. In our case we do not have this issue: we transfer the segments obtained when processing a source cloud into a target map. Each point cloud is then processed only once, and segment retrieval is performed against the whole target map (see the paper for more information on this). Would that be an option for you?

If not, you could adapt the function processAndSetAsTargetCloud() to return the segmented_target_cloud_, and add a simple function setAsSourceCloud(), which would give you something like this:

segmented_cloud[n] = processAndSetAsTargetCloud(pcd_n ...)
for i = 1 to n-1
  setAsSourceCloud(segmented_cloud[i] ...)
  findMatches()
  filterMatches()
end

This would be more efficient, since every cloud is segmented and described only once.

koko19 commented 7 years ago

@rdube Thank you very much for your help. I've managed to integrate it successfully and it works well on the KITTI dataset (tested tracks 0 and 5). However, I also tried it with our indoor dataset and could not get accurate results. I suspect this is because the vocabulary provided for download in the demonstration description, "random_forest_eigen_25trees", is built on the KITTI dataset. Do you think this is the reason, or is something else going wrong indoors? If this is the main problem, is it possible to build a vocabulary on other data with the code provided?

koko19 commented 7 years ago

P.S. For every segmented point cloud there is a much lower number of valid segments than in the KITTI dataset. This is why I had to lower kMinNumberSegmentInTargetCloud, which in your code was set to 50, in order to get any results...

rdube commented 7 years ago

@koko19 Great! Did you push this code to a fork? I would be happy to have a look, as those are definitely functionalities we should try to include in segmatch. Yes, I would expect you to get fewer segments with the Euclidean segmenter. For indoor scenarios you might want to have a look at the smoothness-based region-growing segmenter defined here: https://github.com/ethz-asl/segmatch/blob/master/segmatch/include/segmatch/segmenters/region_growing_segmenter.hpp That should give you more segments (depending on the environment), which should make place recognition easier.

You can see a demonstration using that segmenter here: https://www.youtube.com/watch?v=JJhEkIA1xSE

If you are interested in looking into this, I can push a configuration file that we used with this segmenter. We updated the interface to this segmenter in the latest version of segmatch, so the configurations we are using right now cannot be used directly with this open-source implementation. Again, our goal is to open-source an improved version of segmatch before the end of the year.

rdube commented 6 years ago

@koko19 FYI, @tacoma kindly put some information together in Issue #50 regarding region growing with smoothness constraints.