zhh2005757 / FAST-LIO-Multi-Sensor-Fusion

Fusing GNSS and wheel measurements based on FAST-LIO and IKFOM

Could wheel rotation information help in the field? #9

Closed Userpc1010 closed 1 month ago

Userpc1010 commented 1 month ago

I did a short test of fast-lio2 in the field and it produced a lot of drift. Could wheel encoder information reduce it? I'm also wondering whether the wheel sensors could be replaced with an optical flow sensor for better odometry accuracy.

https://youtu.be/PP92PdQygJ0?si=RpUTI5EgmR24Mzz9

Userpc1010 commented 1 month ago

I found this test with the ADNS 3080 camera, where a self-driving car performs quite well on asphalt relying only on ground texture. I think this could be a great combination with lidar, in place of wheel encoders or on vehicles where they are difficult to install. But it seems that most software solutions do not support such sensors.

zhh2005757 commented 1 month ago

Thanks for your interest. As for your questions, here are my personal opinions:

  1. There are no vertical features in your video, only the wide, flat ground. This is a typical degraded environment for LiDAR scan-matching. Yes, a wheel encoder is absolutely useful here, because it helps bound the IMU drift when the LiDAR is degraded, and with it the drift of the whole odometry system.
  2. Theoretically, fusing multiple sensors, not only wheel encoders, will improve odometry accuracy, especially in LiDAR-degraded environments.
  3. A visual sensor is a good choice to complement the LiDAR itself. There are many papers on visual-LiDAR sensor fusion, such as LVI-SAM, FAST-LIVO, R2LIVE, and so on.
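A body-frame velocity measurement from a wheel encoder (or any other odometry source) typically enters such a filter as a direct observation of the velocity sub-state, which is what bounds the IMU drift during LiDAR degeneracy. The following is a minimal sketch of that update step using a plain linear Kalman update, not this repository's actual IKFOM interface; the function and variable names are illustrative only.

```python
import numpy as np

def velocity_update(x_vel, P, z_vel, R):
    """One Kalman update on a 3-D velocity sub-state.

    x_vel : (3,)  predicted velocity
    P     : (3,3) its covariance
    z_vel : (3,)  measured velocity (e.g. from a wheel encoder)
    R     : (3,3) measurement noise covariance
    """
    H = np.eye(3)                        # the measurement observes velocity directly
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_vel + K @ (z_vel - H @ x_vel)
    P_new = (np.eye(3) - K @ H) @ P      # covariance shrinks after the update
    return x_new, P_new

# With a confident measurement (small R) and an uncertain prior (large P),
# the estimate is pulled strongly toward the measured velocity.
x, P = velocity_update(np.zeros(3), np.eye(3),
                       np.array([1.0, 0.0, 0.0]), 0.01 * np.eye(3))
```

The same update form applies whether the velocity comes from wheel encoders or an optical-flow sensor; only `R` changes to reflect the sensor's noise characteristics.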
Userpc1010 commented 1 month ago

But I would like to draw special attention to the ADNS3080, because it is cheap, tracks displacement along the XY axes without external computation (it has an on-board DSP core), and captures 6400 frames per second, which allows driving at high speed where ordinary cameras cannot keep up. Knowing the height of the ADNS3080 camera above the ground and the pixel displacement it reports along the XY axes, I can obtain ready-made velocities; velocity is exactly what this sensor returns. Wheel encoders ultimately also estimate the vehicle's speed, but they suffer from slippage and drift when turning or sliding, while this visual sensor does not slip. That is why I am wondering whether the program has an interface where, instead of wheel odometry, one could feed in ready-made timestamped velocities on 2 or 3 axes (in the latter case I would take the vertical speed from a barometer) to reduce the IMU drift on all 3 axes.
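The height-plus-pixel-displacement conversion described above can be sketched with a pinhole model: for a downward-facing camera at height h, a ground displacement d maps to a pixel displacement p via p = f·d/h, so v = p·h/(f·Δt). This is a minimal illustration under those assumptions; the function name and parameters (`focal_px`, `height_m`) are hypothetical, not part of any existing ADNS3080 driver.

```python
def flow_to_velocity(dx_px, dy_px, dt_s, height_m, focal_px):
    """Convert per-frame pixel displacement to metric ground velocity.

    dx_px, dy_px : pixel displacement reported by the sensor for one frame
    dt_s         : frame interval in seconds (1/6400 s at 6400 fps)
    height_m     : camera height above the ground plane
    focal_px     : effective focal length of the lens, in pixels
    """
    scale = height_m / focal_px          # metres of ground motion per pixel
    vx = dx_px * scale / dt_s
    vy = dy_px * scale / dt_s
    return vx, vy

# Example: a 12 px shift in x during one 1/6400 s frame, with the camera
# 0.1 m above the ground and an assumed effective focal length of 600 px.
vx, vy = flow_to_velocity(12, 0, 1 / 6400, 0.1, 600)   # vx = 12.8 m/s
```

Note that the result is sensitive to `height_m`, so on uneven terrain the height would itself need to be measured (e.g. by a rangefinder) rather than assumed constant.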

I have not yet found software that fuses data from such visual sensors. As you can see in the picture, the camera lens faces the ground and images the ground texture; it is shielded from shadows by a visor because it uses dense optical flow rather than keypoint detection. Thanks to this, it can pick up the slightest differences in texture. I also equipped it with an IR panel so that it works at night.

[Photos of the sensor setup: 20240418_181018, 20240418_181649, 20240418_181043]

Userpc1010 commented 1 month ago

Thanks for your reply. I just wanted to clarify this nuance; perhaps it will give an advantage in terms of how easily visual sensors can be integrated.