BEAMRobotics / beam_slam

Tightly coupled lidar-visual-inertial SLAM using the fuse framework

Implement Camera-Lidar sensor model #21

Open nickcharron opened 3 years ago

nickcharron commented 3 years ago

We want to add constraints between camera keypoints and lidar maps.

We can most likely use the singleton LidarMap class in beam_slam_common, but we will need a way to pass the keypoints from the VIO to this sensor model. The simplest option is to make a topic that the VIO publishes to; this sensor model then subscribes to that topic and also gets an instance of the lidar map. We could investigate ways to do this without publishing the keypoints (such as using the graph), but publishing likely won't add much overhead, and since the lidar maps lag behind the vision anyway, a bit of delay is acceptable.
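A minimal sketch of the subscribe-and-buffer idea, assuming the message type and class names below are hypothetical placeholders (the real implementation would use ROS subscribers and the actual beam_slam message definitions): the sensor model queues incoming keypoint messages and only processes those the lagging lidar map has caught up to.

```cpp
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// Hypothetical message type: the VIO would publish one of these per keyframe.
// Field names are illustrative, not the actual beam_slam definitions.
struct KeypointMsg {
  uint64_t stamp_ns;                            // keyframe timestamp
  std::vector<std::pair<float, float>> pixels;  // tracked keypoint locations
};

// The camera-lidar sensor model buffers incoming keypoint messages and drains
// only those the (lagging) lidar map already covers, so the delay between the
// vision front end and the lidar map is absorbed here.
class CameraLidarBuffer {
 public:
  // Subscriber callback: just queue the message.
  void OnKeypoints(const KeypointMsg& msg) { buffer_.push_back(msg); }

  // Called whenever the singleton lidar map advances to `map_stamp_ns`:
  // pop every message the map now covers, ready for correspondence search.
  std::vector<KeypointMsg> DrainUpTo(uint64_t map_stamp_ns) {
    std::vector<KeypointMsg> ready;
    while (!buffer_.empty() && buffer_.front().stamp_ns <= map_stamp_ns) {
      ready.push_back(buffer_.front());
      buffer_.pop_front();
    }
    return ready;
  }

 private:
  std::deque<KeypointMsg> buffer_;  // time-ordered: VIO publishes in order
};
```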

This was my idea for doing the correspondence search:

jakemclaughlin6 commented 11 months ago
  1. Subscribe to visual measurements
  2. For each visual measurement, try to get its pose from the graph (waiting until this is available)
  3. Once the pose can be retrieved, transform the current lidar map into the camera frame
  4. Add point-to-plane constraints between visual landmarks and the lidar cloud:
     a. project the lidar points into the image; for each landmark, find the nearest 3 points and compute a plane
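Steps 3-4a above could be sketched as follows. Plain structs stand in for Eigen and the actual beam_slam types, the pinhole intrinsics and the brute-force nearest-neighbour search are assumptions for illustration (a KD-tree would replace the linear scan in practice), and the residual shown is just the signed point-to-plane distance:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal 3D vector helpers (stand-ins for Eigen, which beam_slam uses).
using Vec3 = std::array<double, 3>;

Vec3 Sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
Vec3 Cross(const Vec3& a, const Vec3& b) {
  return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
double Dot(const Vec3& a, const Vec3& b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Pinhole projection into pixel coordinates (assumed intrinsics fx, fy, cx, cy).
struct Pixel { double u, v; };
Pixel Project(const Vec3& p_cam, double fx, double fy, double cx, double cy) {
  return {fx * p_cam[0] / p_cam[2] + cx, fy * p_cam[1] / p_cam[2] + cy};
}

// Plane in Hesse normal form: n . x = d.
struct Plane { Vec3 normal; double d; };

// For one visual landmark (pixel), find the 3 lidar points (already transformed
// into the camera frame) whose projections are nearest in the image, then fit a
// plane through them. Brute-force search for clarity; assumes >= 3 valid points.
Plane FitPlaneToNearest3(const Pixel& landmark, const std::vector<Vec3>& lidar_cam,
                         double fx, double fy, double cx, double cy) {
  std::vector<std::pair<double, size_t>> dists;
  for (size_t i = 0; i < lidar_cam.size(); ++i) {
    if (lidar_cam[i][2] <= 0) continue;  // skip points behind the camera
    Pixel px = Project(lidar_cam[i], fx, fy, cx, cy);
    double du = px.u - landmark.u, dv = px.v - landmark.v;
    dists.push_back({du * du + dv * dv, i});
  }
  std::sort(dists.begin(), dists.end());
  const Vec3& p0 = lidar_cam[dists[0].second];
  const Vec3& p1 = lidar_cam[dists[1].second];
  const Vec3& p2 = lidar_cam[dists[2].second];
  Vec3 n = Cross(Sub(p1, p0), Sub(p2, p0));
  double len = std::sqrt(Dot(n, n));
  n = {n[0] / len, n[1] / len, n[2] / len};
  return {n, Dot(n, p0)};
}

// Point-to-plane residual the constraint would minimize: signed distance of the
// triangulated landmark position to the fitted plane.
double PointToPlaneResidual(const Vec3& landmark_3d, const Plane& plane) {
  return Dot(plane.normal, landmark_3d) - plane.d;
}
```

In the real sensor model this residual would live inside a fuse constraint whose Jacobians also account for the camera pose variables, but the geometry is the same.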