ethz-asl / segmap

A map representation based on 3D segments
BSD 3-Clause "New" or "Revised" License

Does SegMatch work with Velodyne VLP16? #69

Closed: junzhang2016 closed this issue 6 years ago

junzhang2016 commented 6 years ago

Hi, I have read the paper several times in detail and have also tried the two demos; it works very well. But may I ask: does SegMatch work well with a Velodyne VLP16?

I have just started trying to use SegMatch with a VLP16. If someone has come across the same issue, could you tell us the steps to use SegMatch with our own rosbag? Thanks.

In the meantime, I will post the steps here once I get it working.

rdube commented 6 years ago

Hi @junzhang2016, thanks for your interest in SegMatch! Yes, we got nice results on our NiFTI robot equipped with a VLP16 (see the picture below).

The experiment in Section IV-F of the SegMatch paper actually uses VLP16 data and the curvature-based segmentation algorithm. That worked nicely for us! Have you already run into specific issues?

Looking forward to hearing more!

nifti_velodyne

rdube commented 6 years ago

You might find this thread relevant regarding the curvature-based segmentation #50

junzhang2016 commented 6 years ago

Hi @rdube, thank you for your reply and the nice work. I have also successfully run SegMatch with our VLP16; it works, but needs more parameter tuning to improve performance. Currently, I get the results shown in the two images below.

May I ask how you tuned the parameters for your experiment with the VLP16? Which parameters did you modify? Thanks.

(Next, I will show how I use SegMatch with our dataset collected with the VLP16.)

junzhang2016 commented 6 years ago

Here is how I use SegMatch with our dataset collected with the VLP16.

  1. Follow the Demonstrations. You may want to use a download manager to fetch the dataset (>10 GB) more quickly.

  2. Duplicate the files under /segmatch/laser_mapper/launch/kitti and rename them with a _vlp16 suffix, as shown: screenshot from 2018-02-05 09 38 52

  3. Modify vlp16_loop_closure.launch:

    • Bag file path: your rosbag file
    • ROS parameters: your vlp16_loop_closure.yaml
    • laser_mapper node: point the config values at the new files, i.e. value="$(find laser_mapper)/launch/kitti/icp_dynamic_outdoor_vlp16.yaml" and value="$(find laser_mapper)/launch/kitti/input_filters_outdoor_vlp16.yaml"
  4. Modify vlp16_loop_closure.yaml:

    • assembled_cloud_sub_topic: "/velodyne_points" — set this to your point cloud topic
    • segmentation_radius_m: 30.0 # 60 -> 30: I believe this is the radius of the cylinder; I changed it from 60 to 30 meters for our smaller environment.
    • ec_min_cluster_size: 100 # 200 -> 100: I changed it from 200 to 100 so there will be more clusters.
    • threshold_to_accept_match: 0.3 # 0.65 -> 0.3: so there will be more matches.
  5. Modify input_filters_outdoor_vlp16.yaml:

    • BoundingBoxDataPointsFilter: zMin: -0.55 # -1.2 -> -0.55: our Velodyne is mounted 0.7 meters above the ground, so I set it to -0.55 to reliably remove the ground plane.

That's all for loop_closure.
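Collected in one place, the yaml changes from steps 4 and 5 look roughly like this (a summary of the values listed above; the exact key layout in your files may differ between versions):

```yaml
# vlp16_loop_closure.yaml (excerpt; changed values only)
assembled_cloud_sub_topic: "/velodyne_points"  # set to your point cloud topic
segmentation_radius_m: 30.0      # was 60.0; smaller radius for a smaller environment
ec_min_cluster_size: 100         # was 200; yields more clusters
threshold_to_accept_match: 0.3   # was 0.65; yields more matches

# input_filters_outdoor_vlp16.yaml (excerpt)
BoundingBoxDataPointsFilter:
  zMin: -0.55                    # was -1.2; the sensor is 0.7 m above ground,
                                 # so this reliably removes the ground plane
```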

For localization, I need to save the point cloud output from the step above; the topic is /segmatch/target_representation. During the loop_closure run, use this command to save the point cloud: $ rosrun pcl_ros pointcloud_to_pcd input:=/segmatch/target_representation _prefix:=target_

Then, in vlp16_localization.launch:

rdube commented 6 years ago

@junzhang2016 that looks awesome! Thanks for sharing these results and the instructions! Are you planning on providing the bag file that you used?

It looks like cars are often used for localization. Is that an issue for you? We experienced the same with KITTI, as cars result in very distinctive and well-separated segments. In our future work we will show that it is also possible to localize without using car segments.

There are several parameters that could be adjusted and experimented with. I would probably start by trying the curvature-based segmentation (as you have seen in the other post) and see how many segments you get.

How long is your dataset? You might consider lowering min_time_between_segment_for_matches_s if you feel that part of the environment is revisited within that duration.

Thanks for keeping us updated! :)

junzhang2016 commented 6 years ago

@rdube Thank you :). I would like to share our dataset, but it is rather big (10 GB), so I am still figuring out how to share the bag file. Do you have any suggestions?

Our bag file lasts 15:12 (912 s).

DriftingTimmy commented 6 years ago

Amazing output with the VLP16 — better than I had imagined. The segments are clean and can easily be classified by eye.

rdube commented 6 years ago

@junzhang2016 I will check and come back to you if we can propose a solution for storing your dataset.

@DriftingTimmy this sounds interesting! Can you elaborate on that classification idea?

DriftingTimmy commented 6 years ago

segshow

segmatch_data_total Well, I just tried the algorithm with another kind of 16-beam LiDAR, following the instructions of @junzhang2016; here is the result. segmatch_details

The segment output is great, and I want to find a way to split out the segmentation code. I also found some mismatches in my own data; I don't know why they happen, because I don't think these matches should be detected. The matches are shown in the following screenshot: screenshot from 2018-02-08 09 22 25 It was my second time running SegMatch on the same data, but the mismatches make the map disordered. I will check the params, try to fix it, and then figure out the reason in the code.

Thanks again for the instructions, @junzhang2016!

rdube commented 6 years ago

@DriftingTimmy it looks like there might be a lot of aliasing due to the repetitive tree pattern. That can be challenging for SegMatch. Have you tried increasing the minimum number of geometrically consistent matches required to output a localization? https://github.com/ethz-asl/segmatch/blob/master/laser_mapper/launch/kitti/kitti_localization.yaml#L99
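For reference, the linked parameter sits in the geometric-consistency block; a sketch of what that section looks like (key names and values based on the configuration posted later in this thread and may differ between versions):

```yaml
# kitti_localization.yaml (excerpt; illustrative values)
GeometricConsistency: {
  recognizer_type: "Incremental",
  resolution: 0.4,
  min_cluster_size: 6,  # minimum number of geometrically consistent matches
                        # required to output a localization; raise this to
                        # suppress aliasing-induced false positives
}
```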

Another option would be to add a post-processing step to prevent localization on matches which are geometrically symmetric. A data-driven option could also avoid using segments which are not useful for localization. I would be curious to give your problem a try. Would it be easy for you to share this dataset?

DriftingTimmy commented 6 years ago

@rdube Sorry to reply so late; I was away for the Spring Festival. I will upload my data later and give you a download link. There is one point I cannot figure out: how do you localize the position of the car, and how do you match the clusters between two different frames? If all clusters have their own position, I would expect mismatches to be rare, and the repetitive tree sections should not matter that much. Thanks for replying.

And here is my own data: https://drive.google.com/open?id=1ny86j1_OhfM9k6vOn18miqPJ6tZWaefY Hoping for your reply!

rdube commented 6 years ago

@DriftingTimmy Regarding the "position of the car": we use the centroids of the segments as a basis for estimating the loop-closure transformations using geometric verification. Regarding "how you match the clusters between two different frames": we use k-NN retrieval. You can find more information about this in our paper. I still believe that aliasing can be challenging in that scenario: sets of trees form very similar patterns all over the environment. Hope that helps!
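The two ideas — descriptor-based retrieval of candidate matches, then a consistency check on segment centroids — can be sketched in a few lines. This is a simplified illustration under my own assumptions, not the SegMatch implementation (which uses eigenvalue-based features and an incremental recognizer); all names and data here are made up:

```python
import numpy as np

def knn_candidates(query_desc, map_descs, k=2):
    """Return indices of the k map segments whose descriptors are closest
    to the query descriptor (brute-force k-NN retrieval)."""
    d = np.linalg.norm(map_descs - query_desc, axis=1)
    return np.argsort(d)[:k]

def pairwise_consistent(matches, query_centroids, map_centroids, tol=0.5):
    """Keep matches whose pairwise centroid distances agree between the
    local (query) frame and the map frame, i.e. matches that a single
    rigid transform could explain."""
    consistent = []
    for i, (qi, mi) in enumerate(matches):
        ok = 0
        for j, (qj, mj) in enumerate(matches):
            if i == j:
                continue
            dq = np.linalg.norm(query_centroids[qi] - query_centroids[qj])
            dm = np.linalg.norm(map_centroids[mi] - map_centroids[mj])
            if abs(dq - dm) < tol:
                ok += 1
        if ok >= 1:
            consistent.append((qi, mi))
    return consistent

# Toy demo: three query segments are the first three map segments shifted
# by a rigid translation; one wrong match is injected and gets rejected.
map_c = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [50.0, 50.0]])
qry_c = map_c[:3] + np.array([100.0, 5.0])
matches = [(0, 0), (1, 1), (2, 2), (2, 3)]
kept = pairwise_consistent(matches, qry_c, map_c)
```

Aliasing hurts exactly here: if trees produce many near-identical descriptors and repeated spacing patterns, even the pairwise-distance check can be satisfied by a wrong set of matches.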

rdube commented 6 years ago

@DriftingTimmy I gave a quick try to your dataset using the latest SegMatch implementation (which will be publicly available in April). It worked out of the box, using the exact same parameters that we use for the KITTI dataset. As you can see in the picture (white bars), multiple loops were closed all around the trajectory and no false-positive localizations were detected.

loop_driftingtimmy

In case you want to reproduce this, here are the parameters that were used (same as for KITTI in the latest version). It is very likely that it would also work with a simpler configuration (e.g. k-NN retrieval without the additional verification of the feature distance). FYI, the parameters have been renamed in our latest version, so you might have to map them to the old names (or wait for the release in April).

SegMatch: {
    filter_duplicate_segments: true,
    centroid_distance_threshold_m: 2.5,
    min_time_between_segment_for_matches_s: 60,
    check_pose_lies_below_segments: false,

    LocalMap: {
      voxel_size_m: 0.10,
      min_points_per_voxel: 1,
      radius_m: 60,
      min_vertical_distance_m: -999.0,
      max_vertical_distance_m: 999.0,
      neighbors_provider_type: "KdTree",
    },

    Segmenters: {      
      segmenter_type: "IncrementalEuclideanDistance", # IncrementalSmoothnessConstraints, SimpleSmoothnessConstraints
      min_cluster_size: 100,
      max_cluster_size: 15000,
      radius_for_growing: 0.2
    },

    Descriptors: {
      descriptor_types: ["EigenvalueBased"], # "EnsembleShapeFunctions"
    },

    Classifier: {
     threshold_to_accept_match: 0.60,
     n_nearest_neighbours: 150,
     knn_feature_dim: 7,
     enable_two_stage_retrieval: true,
     apply_hard_threshold_on_feature_distance: true,
     feature_distance_threshold: 0.011,

     normalize_eigen_for_knn: false,
     normalize_eigen_for_hard_threshold: true,
     max_eigen_features_values: [2493.5, 186681.0, 188389.0, 0.3304, 188388.0, 1.0899, 0.9987]
    },

    GeometricConsistency: {
      recognizer_type: "Incremental", #"Simple", "Partitioned"
      resolution: 0.4,
      min_cluster_size: 6,
      max_consistency_distance_for_caching: 3.0,
    }
  }

Hope that helps! :)

DriftingTimmy commented 6 years ago

Thanks for your reply. The results from the upcoming version are impressive. I will check the params and wait for the new version. Thanks again @rdube

DriftingTimmy commented 6 years ago

@rdube Sorry to bother you again, but I have some questions about the code. I am interested in the segmentation part, and I could not find the implementation of the segment-growing part. I know that Euclidean clustering is used to segment one frame, and I think there should be a part that integrates several nearby frames to produce the source_representation message, which shows a great output of the segments at the current pose. Can you tell me where this part is implemented in your code? That would help me a lot. Thanks, and I hope for your reply!

rdube commented 6 years ago

Hi @DriftingTimmy, the accumulation of each scan into a point cloud happens here: https://github.com/ethz-asl/laser_slam/blob/master/laser_slam_ros/src/laser_slam_worker.cpp#L235

The voxelization and cylindrical filtering happen here: https://github.com/ethz-asl/laser_slam/blob/master/laser_slam_ros/src/laser_slam_worker.cpp#L387 (this is messy and was cleaned up in our new version)
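As a rough illustration of what those two filtering steps do — this is a simplified sketch under my own assumptions, not the laser_slam code: the cylindrical filter keeps points within a horizontal radius of the robot (cf. the radius_m / segmentation_radius_m parameters), and the voxel grid downsamples by keeping one averaged point per occupied voxel (cf. voxel_size_m):

```python
import numpy as np

def cylindrical_filter(points, center, radius_m):
    """Keep points whose horizontal (x, y) distance to center is < radius_m."""
    d = np.linalg.norm(points[:, :2] - center[:2], axis=1)
    return points[d < radius_m]

def voxel_grid_filter(points, voxel_size_m):
    """Simple voxel grid: average all points falling into the same voxel."""
    keys = np.floor(points / voxel_size_m).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])
```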

Finally, the implementation of the point cloud segmentation is here: https://github.com/ethz-asl/segmatch/blob/master/segmatch/src/segmenters/euclidean_segmenter.cpp#L22
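The core idea of that Euclidean segmenter can be sketched like this — a simplified stand-in for PCL-style Euclidean cluster extraction, not the actual implementation; the parameter names mirror the config used earlier in this thread:

```python
import numpy as np

def euclidean_clusters(points, radius_for_growing, min_cluster_size=1,
                       max_cluster_size=10**6):
    """Grow clusters by connecting points closer than radius_for_growing;
    keep only clusters whose size lies within the given bounds."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            # Neighbors within the growing radius that are still unvisited.
            neighbors = [j for j in np.where(d < radius_for_growing)[0]
                         if j in unvisited]
            for j in neighbors:
                unvisited.discard(j)
            cluster.extend(neighbors)
            frontier.extend(neighbors)
        if min_cluster_size <= len(cluster) <= max_cluster_size:
            clusters.append(sorted(cluster))
    return clusters
```

This also shows why lowering ec_min_cluster_size yields more clusters: small groups of points that would otherwise be discarded now pass the size filter.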

Hope that helps!

DriftingTimmy commented 6 years ago

@rdube Thanks so much. Actually, I have read those parts, and now I am trying to separate the segmentation and matching parts of SegMatch. Because the algorithm depends on laser_mapper and GTSAM, I want to use g2o instead and make the code lighter by combining it with LOAM. Do you have any suggestions about this idea?

rdube commented 6 years ago

@DriftingTimmy Yes please see #72

DriftingTimmy commented 6 years ago

@rdube @junzhang2016 My professor asked me to find a method to match different frames by matching only planes, in environments like the main road of a town. But I think that is difficult to accomplish. Do you have any good ideas or references? Or maybe you share the view that this is not useful.

rdube commented 6 years ago

@DriftingTimmy you can try the smoothness-based region-growing algorithm. See #50

DriftingTimmy commented 6 years ago

@rdube Yep, that really helps! The plane features are segmented better than with the Euclidean segmentation method. I also just saw your team's newest research, and the results and performance are amazing. But how can I get the newest paper? I cannot find "SegMap: 3D Segment Mapping using Data-Driven Descriptors" on Google or IEEE. Hoping to see this paper ASAP.

rdube commented 6 years ago

@DriftingTimmy the "best" segmentation strongly depends on the type of environment; there are several segmentation algorithms with which we have not yet experimented. Thanks for your interest in our newest paper. We have uploaded it to arXiv and it should be available by Wednesday.

kentsommer commented 6 years ago

@rdube

Looking forward to reading the paper; the results video on YouTube is very impressive. Curious whether there is a target timeline for the open-source release accompanying the newest paper?

rdube commented 6 years ago

@kentsommer thank you! @smauq @danieldugas what do you guys think? Can we make it happen by mid June?

Guys, we are diverging from the original issue topic. Please open a new issue if your question is not related.

danieldugas commented 6 years ago

@rdube are you familiar with Hofstadter's law? With that in mind, if things go smoothly, I think we can.

I think we can close this issue, the question has been answered in my opinion.

rdube commented 6 years ago

We'll close this issue then! @junzhang2016 @DriftingTimmy thanks for your input and feel free to open another issue if you have a specific question.

kentsommer commented 6 years ago

@rdube

Just tried running on @DriftingTimmy's dataset and was wondering whether you used the odometry provided in his bag for the LaserSlamWorker, and which frames you had it tracking (I would have guessed rs_odom for odom_frame and rslidar for sensor_frame; however, this doesn't appear to work). Did you add a static transform?

Edit: I have not spent any time trying to debug yet as I figured you might still have a config lying around that works. If not, no worries.

rdube commented 6 years ago

@kentsommer I used rs_odom for the odom_frame and base_link2 for the sensor_frame. That was sufficient to close the loops, as you can see in my previous post. To be more accurate, we should make sure that the LiDAR scans are correctly transformed into the base_link2 frame; that would be a question for @DriftingTimmy. Hope that helps!
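If the bag itself does not carry a transform into base_link2, one way to provide it is a static transform in the launch file. A hypothetical sketch using the standard tf static_transform_publisher (the node name and the 0.7 m z-offset are placeholders; the actual offsets depend on how the sensor is mounted):

```xml
<!-- Hypothetical: fixed base_link2 -> rslidar transform.
     args = "x y z yaw pitch roll parent child period_in_ms" -->
<node pkg="tf" type="static_transform_publisher" name="lidar_to_base"
      args="0 0 0.7 0 0 0 base_link2 rslidar 100" />
```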

lucaixia1992 commented 6 years ago

@junzhang2016 If you don't mind providing your email, I would like to consult with you on this issue. Thanks.

junzhang2016 commented 6 years ago

Hi @lucaixia1992, I am glad to share my email: zjruyihust@gmail.com. We can talk together with @DriftingTimmy; he is already quite familiar with SegMatch. If we can't solve the problem, we can turn to the author @rdube for help.

lucaixia1992 commented 6 years ago

OK, thanks.

rdube commented 6 years ago

@junzhang2016 @lucaixia1992 feel free to include me in the email thread (my email is in the readme). Thanks guys!

junzhang2016 commented 6 years ago

@rdube Thank you for your warm help :)

DriftingTimmy commented 6 years ago

@junzhang2016 @lucaixia1992 So sorry that I missed all your messages these days; I was busy testing the cartographer algorithm on my own vehicle and hardware. If you guys need some help, I am glad to help as much as I can!

@kentsommer The frame config for my bag file is: odom_frame: "rs_odom", sensor_frame: "base_link"

junzhang2016 commented 6 years ago

@DriftingTimmy All right, thanks, man. :)

wangyuwei2015 commented 6 years ago

@junzhang2016 Would you please share your VLP16 launch file and yaml file? I am having the same issue, but I can't extract the information I need from the pictures you posted.

brunoeducsantos commented 5 years ago

Has anyone been able to use the data provided by this repo for KITTI loop closure?

Lachiven commented 4 years ago

@junzhang2016 Hi, I am new to SegMatch and also want to run it with our VLP16. I have some questions about my setup: how is the tf between the VLP16 and the world obtained? And did you build the map with SegMap, and if so, how? Can we discuss it on QQ? My QQ number is 1353632881. Thanks a lot! @junzhang2016

wangwenbin1991 commented 4 years ago

@junzhang2016 When I test my 16-beam LiDAR data, I can't get good results. Would you please share your yaml file? My email is 15210617541@163.com. Thanks very much!