NeBula-Autonomy / LOCUS

Robust Lidar Odometry System

Testing in tunnel #47

wpwebrobotics opened this issue 2 years ago

wpwebrobotics commented 2 years ago

Hi, thank you for sharing your work. We are testing LOCUS in different environments with a drone (currently hand-carried) carrying one Velodyne VLP-16 and a VectorNav IMU. Where there are sufficient planes and edges, all is well. Now we are testing with a rosbag recorded in a very monotonous railway tunnel about 50 m long. About 9 m in from the entrance, LOCUS hangs for some time until the drone approaches the exit, so roughly 25 meters go untracked. Video here: https://www.youtube.com/watch?v=RqD9AQTF9YE

I have tried many combinations of these parameters:

leaf_size: 0.25/0.10/0.20/0.30/0.50

recompute_covariance_local_map: true/false

icp_covariance_method: 1/0

icp_max_covariance: 0.01/0.1

iterations (number of ICP iterations): 30/25/20/15/10

normal_search_radius (radius for normal search): 10/20/30

I did not get much difference. Do you know of any other configurations to try in this environment? If this, as with other SLAM systems, is simply a difficult environment, does a second point cloud have a good chance of solving the problem? And, since we need the first lidar horizontal for collision avoidance, what orientation do you think would be optimal for a second lidar? See you soon.

BenjaminMorrell commented 2 years ago

Hi @wpwebrobotics - thanks for testing locus!

You have hit the nail on the head with your question - this is one of the more challenging aspects of lidar SLAM, especially with lower-resolution lidars such as the VLP-16. So I am super excited you are testing in such a challenging environment - it is a great way to tackle these lidar-degenerate cases. That tunnel looks especially challenging!

Unfortunately, there is probably a limited amount you can do by tuning ICP parameters - there is just inherently limited information in that environment. We did have an observability module that would have shown that your observability in the tunnel is very low. @yunzc might have some modules in LAMP you could use to test the point clouds from there (if you are interested, I can give you more info).

Hence, my advice for these environments is:

  1. Integrate another source of odometry - e.g. the visual signal you have shows movement, and running some kind of VIO on it would give an odometry that could be fed into LOCUS as the initial guess. This is how we got around these environments in our tests - feeding VIO or Wheel-Inertial Odometry (WIO) as the initial guess to LOCUS. Of course, you need to have the available compute on your drone. One quicker option is to get a RealSense T265 - that gives you an odometry output directly.
  2. Try a lidar with higher vertical resolution and FOV (e.g. Ouster OS0 64 or 128) - this is expensive, I know, but should give more information for ICP in these environments
  3. A more affordable option is to use a RealSense L515 (if you can get one) and point it diagonally down at the ground - that might help. In general, a second lidar pointing down to see the ground can help. People tend to point L515s straight down or up so that they are also useful for avoiding obstacles during vertical flight.
  4. Tilting your VLP-16 to around 30 deg might help, to see the ground and ceiling, but that is tough to do on a drone and may not help that much (you could test with a hand-carry test, keeping the drone tilted at 30 deg).

It is especially a challenge on drones because of the limited power and weight budget, but hopefully the thoughts above help.

Happy to continue the discussion here.

BenjaminMorrell commented 2 years ago

@femust - do we still have our observability computations in the code?

femust commented 2 years ago

Yes, we still have it, and it could be used for this purpose - though we probably shouldn't run it after the update of the odometry step hehe, but that part is still to be improved :)

Cristian-wp commented 2 years ago

Hello, I have an L515 and I want to use it as a second odometry source like you suggest. How can I do this? The RealSense package offers a tracking example, but it is not reliable... can you please suggest a way to do this?

femust commented 2 years ago

@Cristian-wp if the L515 provides visual odometry, for example, you just need to change the ODOMETRY_TOPIC in locus.launch, and in lo_setings.yaml keep data_integration on mode 3.
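For example, it could look roughly like this (a sketch only: the exact argument name in locus.launch and the /vins_estimator/odometry topic are assumptions; any nav_msgs/Odometry stream should work):

<!-- locus.launch: point the odometry input at the external source (topic name is an example) -->
<arg name="ODOMETRY_TOPIC" default="/vins_estimator/odometry"/>

# lo_setings.yaml: integrate the external odometry stream
data_integration:
  mode: 3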

femust commented 2 years ago

@wpwebrobotics can we close this issue, or are additional comments needed?

Cristian-wp commented 2 years ago

I have checked online and even asked Intel: the L515 cannot provide an odometry topic; you have to use an external package to generate one. Can you suggest a robust package to perform visual odometry? And is it better to generate this odometry using the camera's internal IMU (low grade) or an external high-grade IMU?

femust commented 2 years ago

It doesn't have to be odometry directly from a visual sensor; it can be wheeled (or legged) odometry as well. You can also use an IMU: in lo_setings.yaml keep data_integration on mode 1 or 2 (or 0 if you want to do purely lidar odometry without preintegration).
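To summarize the modes mentioned in this thread (a sketch; check lo_setings.yaml in your LOCUS version for the authoritative list):

# lo_setings.yaml
data_integration:
  mode: 3   # external odometry (visual, wheeled, legged, ...)
  # mode: 1 or 2 -> IMU-based integration
  # mode: 0     -> pure lidar odometry, no preintegration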

Cristian-wp commented 2 years ago

@femust thanks for the fast reply :) I want to use visual-inertial odometry as the input, alongside the standard lidar-inertial SLAM performed by LOCUS. As you say, "it doesn't have to be odometry directly from a visual sensor", so I want to fuse the camera IMU with the camera images. Can you please suggest a ROBUST ROS package to perform this operation? @BenjaminMorrell

BenjaminMorrell commented 2 years ago

Hi @Cristian-wp - thanks for your questions above.

To clarify the L515: this is another lidar sensor, so you could fuse it as a second point-cloud input alongside your VLP-16 to get a richer point cloud for LOCUS. There are implementation details there (timing, alignment and filtering), but it is an option, e.g. to give more vertical visibility.
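If you try the fused-cloud route, a minimal node could look something like the sketch below (topic names are assumptions, and it presumes both clouds are already expressed in a common frame - in practice you would first transform one cloud into the other's frame using your extrinsic calibration, e.g. via tf2):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <boost/bind/bind.hpp>

using sensor_msgs::PointCloud2;
using SyncPolicy = message_filters::sync_policies::ApproximateTime<PointCloud2, PointCloud2>;

ros::Publisher fused_pub;

// Concatenate two roughly simultaneous clouds into one message.
void FuseClouds(const PointCloud2::ConstPtr& vlp16, const PointCloud2::ConstPtr& l515) {
  pcl::PointCloud<pcl::PointXYZ> a, b;  // PointXYZ drops intensity; use PointXYZI to keep it
  pcl::fromROSMsg(*vlp16, a);
  pcl::fromROSMsg(*l515, b);
  a += b;  // assumes both clouds share a common frame (calibration applied upstream)
  PointCloud2 out;
  pcl::toROSMsg(a, out);
  out.header = vlp16->header;  // keep the VLP-16 stamp and frame for LOCUS
  fused_pub.publish(out);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_fusion_sketch");
  ros::NodeHandle nh;
  // Example topic names only - adjust to your drivers.
  message_filters::Subscriber<PointCloud2> vlp_sub(nh, "/velodyne_points", 5);
  message_filters::Subscriber<PointCloud2> l515_sub(nh, "/l515/depth/points", 5);
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), vlp_sub, l515_sub);
  sync.registerCallback(boost::bind(&FuseClouds, boost::placeholders::_1, boost::placeholders::_2));
  fused_pub = nh.advertise<PointCloud2>("/fused_points", 5);
  ros::spin();
  return 0;
}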

T265 - this is a visual SLAM sensor from Intel (https://www.intelrealsense.com/tracking-camera-t265/), and there are other options out there now from smaller companies. It would give the odometry output that @femust was describing above. Other than calibrating frames, this is the easiest option to set up and run.

If you want to run your own Visual-Inertial Odometry (VIO), I would look at some of the following open-source packages (plus many other strong options):

Kimera-VIO - https://github.com/MIT-SPARK/Kimera-VIO

ORB-SLAM3 - https://github.com/UZ-SLAMLab/ORB_SLAM3

VINS-Mono - https://github.com/HKUST-Aerial-Robotics/VINS-Mono

If you go with these options, there are a number of implementation steps that are important for getting the desired accuracy: rigid IMU-to-camera mounts, vibration/noise isolation if you are on a drone, and good calibration between camera and IMU. Using a camera that comes with a pre-calibrated IMU can make things quicker. Otherwise, it is a good exercise to set things up yourself - just give it the time to do it right and get decent results.

wpwebrobotics commented 2 years ago

Hi everyone, I was away on vacation but my colleague Cristian kept this very interesting discussion open. To summarize: to attempt navigation in our environment I would rule out, to begin with, a pure-lidar solution with only one lidar. So in my opinion these are the solutions to be evaluated, depending on the degree of complexity, weight and power consumption, and taking into account that a forward-facing point cloud is necessary for avoidance:

  1. Pure lidar: two lidars with a larger vertical field of view than the VLP-16, tilted at different angles. To contain weight, one could try the L515 as the second lidar.

  2. One lidar + external odometry: give LOCUS an odometry source derived from sensors other than the lidar (e.g. VI-SLAM), using one package among those you mention: we know VINS, we know ORB-SLAM2 and can evaluate ORB-SLAM3, and we can try Kimera. We have also used ROVIO in the past with good robustness results.

  3. Point 2 with two lidars.

We hope to do these tests soon. Giorgio.

BenjaminMorrell commented 2 years ago

Good luck with this! I think that is a good summary. Yes, ROVIO is a great option as well.

Another thought - depending on the specifics of your application - is to use a UWB ranging beacon. It is lightweight and low-power, and if you are just moving down a tunnel, it can provide enough constraints to combat the "lidar slip", as we call the scenario you are seeing in the tunnel. This isn't plug-and-play in LOCUS, though - it would need another fusion step after LOCUS (e.g. an addition to LAMP, or another approach).

wpwebrobotics commented 1 year ago

Thank you and likewise.

Cristian-wp commented 1 year ago

@femust Thank you for your help, but can you reopen the issue? Last week I tried to run LOCUS with VINS-Mono as the odometry input, but the performance does not change: LOCUS always fails at the same point. In rqt_graph I can see that the topics are connected correctly, and I have verified that they are not empty. My setup is:

data_integration:
  mode: 3

<param name="b_integrate_interpolated_odom" value="true"/>  <!-- 10 Hz odom frequency -->
<param name="b_pub_odom_on_timer" value="false"/>
<param name="compute_icp_covariance" value="false"/>

Can you suggest a way to fix the problem? Maybe I have to increase the "weight" of the odometry during data fusion - how can I do that? Is there a way to check that the odometry topic is being fused correctly? (I get no warnings or errors in the terminal.) The odometry output is correct with respect to ground truth (distances hand-measured in the test field).

femust commented 1 year ago

If the tunnel is very, very long (as in your case), LOCUS may not behave nicely; this is a widely known issue with lidar odometry systems and ICP, even with preintegration from odometry sources. You would probably spend a lot of time tuning parameters to make it work, which I don't think makes sense if you want to do something generic.

What I would suggest is to turn on the flag compute_icp_observability: true in point_cloud_localization/parameters.yaml and write some threshold-based logic in PointCloudLocalization.cc that switches to a purely visual-inertial odometry system (VINS). For this we have the is_healthy flag and the whole diagnostic_msgs::DiagnosticStatus infrastructure, which, frankly speaking, is not currently used.
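A hypothetical sketch of that logic (the threshold, the observability quantity, and all member names apart from is_healthy are illustrative assumptions, not the actual LOCUS code):

// Hypothetical addition to PointCloudLocalization.cc: flag low observability
// so a supervisor node can switch the odometry source to VIO.
#include <diagnostic_msgs/DiagnosticStatus.h>

void PointCloudLocalization::CheckObservability(double observability) {
  // 'observability' would come from the compute_icp_observability analysis,
  // e.g. a normalized smallest eigenvalue of the ICP information matrix.
  const double kMinObservability = 0.1;  // assumed threshold; tune per sensor/environment
  is_healthy_ = (observability > kMinObservability);

  diagnostic_msgs::DiagnosticStatus status;
  status.name = "LOCUS ICP observability";
  status.level = is_healthy_ ? diagnostic_msgs::DiagnosticStatus::OK
                             : diagnostic_msgs::DiagnosticStatus::WARN;
  status.message = is_healthy_ ? "scene observable" : "degenerate scene: prefer VIO";
  diagnostics_pub_.publish(status);  // downstream logic decides when to switch
}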

Cristian-wp commented 1 year ago

Hi @BenjaminMorrell, about your offer to test our point clouds with the LAMP modules ("@yunzc might have some modules in LAMP you could use to test the point clouds from there (if you are interested, I can give you more info)"): @yunzc, can you please explain how to do it?

Cristian-wp commented 1 year ago

@femust thanks, I will try this approach with the Velodyne. Do you have an example, or maybe a paper, that explains how to switch the SLAM method safely? Maybe I am wrong, but isn't it dangerous to switch the SLAM method mid-air?

We now also have an Ouster, so we will repeat the first suggested tests with this new sensor.

femust commented 1 year ago

LOCUS is an odometry system, so it works on incremental changes. You can simply detect the slip of the lidar odometry and, from that point on, use incremental measurements from the VIO; then, if you want a full SLAM system, just feed the result to LAMP.

BenjaminMorrell commented 1 year ago

@Cristian-wp - have you looked at how well VINS-Mono runs by itself? Is it tracking the motion correctly in the areas where raw LOCUS is not?

That will be helpful to get an idea of how much we can expect the VIO to assist.

From what @femust says just above, that would be the way to get an observability parameter (I believe it is the same method @yunzc has implemented). If you read this metric, I would test whether it gives you a good signal in the challenging parts of the tunnel (i.e. tells you that your observability is low).

You could switch the input to LAMP based on this signal (e.g. switch from LOCUS to VIO); however, you have to be careful when switching not to introduce incorrect jumps. See this paper for details on how we have done this in the past: https://arxiv.org/pdf/2008.09679.pdf

BenjaminMorrell commented 1 year ago

Basically, you need to make sure you are always using only incremental data. There is a version of this logic in LOCUS for the data integration part.
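One way to picture the "incremental only" rule: whichever source is active, only the relative transform between its consecutive poses is applied to the output, so a switch never injects the absolute offset between the two odometry frames. A minimal sketch (Eigen types; illustrative, not LOCUS's actual implementation):

#include <Eigen/Geometry>

// Accumulated output pose, plus the last pose seen from the active source.
Eigen::Isometry3d output = Eigen::Isometry3d::Identity();
Eigen::Isometry3d last_source_pose = Eigen::Isometry3d::Identity();
bool have_last = false;

// Call with consecutive poses from whichever source is currently trusted.
// On a source switch, reset have_last so the first new sample only re-anchors.
void Integrate(const Eigen::Isometry3d& source_pose) {
  if (have_last) {
    // Relative motion since the previous sample of the SAME source:
    const Eigen::Isometry3d delta = last_source_pose.inverse() * source_pose;
    output = output * delta;  // apply the increment; absolute frames never mix
  }
  last_source_pose = source_pose;
  have_last = true;
}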

Cristian-wp commented 1 year ago

@BenjaminMorrell VINS runs well on our datasets where LOCUS fails. @femust thanks a lot for your reply; I will try to implement your solutions in the coming days. I will let you know about my progress and problems :)