Closed: jobesu14 closed this issue 2 years ago.
If I'm not wrong, it is written in the paper: R3LIVE should support mechanical LiDARs, as FAST-LIO (1 and 2) and R2LIVE do. I'm located in Wallis, so if you want to see some datasets you can reach me ;). I would just not recommend testing R2LIVE, as it is a proof of concept and in my case it tends to fail more often than FAST-LIO2, since it is based on FAST-LIO1.
I am not able to find that info in the paper 😅 But yeah, I agree that because the LIO part of R3LIVE is based on FAST-LIO2, spinning LiDARs should work. The points outside of the camera frustum should just be left inactive for the VIO part of the algorithm, I guess... (you're also in Switzerland? small world! 😄)
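Something like this pinhole test is what I have in mind for deciding whether a point can be used by the VIO update (an untested sketch; all names are mine, not from the R3LIVE code):

```cpp
#include <Eigen/Dense>

// Hypothetical frustum check: a LiDAR point is usable by the VIO update
// only if it projects inside the camera image.
// K is the 3x3 pinhole intrinsic matrix, T_cam_lidar the extrinsics.
bool in_camera_frustum(const Eigen::Vector3d& p_lidar,
                       const Eigen::Matrix3d& K,
                       const Eigen::Isometry3d& T_cam_lidar,
                       int img_width, int img_height)
{
    const Eigen::Vector3d p_cam = T_cam_lidar * p_lidar;
    if (p_cam.z() <= 0.0) return false;              // behind the camera
    const Eigen::Vector3d uv = K * (p_cam / p_cam.z());
    return uv.x() >= 0.0 && uv.x() < img_width &&
           uv.y() >= 0.0 && uv.y() < img_height;
}
```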
We made only a limited evaluation of our algorithm on spinning LiDARs, since it requires a new hardware setup, which is always time-consuming. Theoretically, our algorithm can work with a spinning LiDAR after suitably adapting the input LiDAR measurements. Lastly, if possible, I will provide an example of running R3LIVE with a spinning LiDAR (Ouster OS2, 64 lines).
Thanks for your answer. Happy to help/contribute to making the pipeline run with a spinning LiDAR if needed (I already have a hardware setup). You can contact me directly if interested.
@jobesu14 I am working on improving FAST-LIO2 for spinning LiDARs since there are a couple of fundamental changes that can be exploited.
The algorithm can localize the vehicle and create a map at 20 m/s on a self-driving car taking tight curves. When the R3LIVE code is definitively released, I want to adapt it too! Have you made any recent advances (since this thread is from November)? I will publish a pre-release of the code, hopefully in a couple of weeks.
Hello @Huguet57, great to hear that work is being done to improve FAST-LIO2 specifically for spinning LiDARs. I would indeed be super interested to test out your improvements! I saw that you are a visiting researcher at HKUST; does that mean you're working with the HK-Mars-Lab?
On my side, I have worked on a fork of FAST-LIO2 where I refactored the code a bit to remove the dependency on the livox-driver, and I also fixed/added some code to properly save the accumulated point cloud as a .pcd file. I use FAST-LIO2 mostly in indoor environments and I have to say that it works incredibly well. The only case where the SLAM diverges is in long, narrow tunnels where the geometric features are poor and the environment is symmetric, which is not surprising. Integrating vision into the SLAM algorithm (-> R3LIVE) would probably improve that significantly.
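For reference, the saving part boils down to something like this (a simplified PCL sketch, not the exact code from my fork):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

// Minimal sketch: accumulate every deskewed, world-frame scan and dump
// the result to disk as a binary .pcd when the mapping node shuts down.
pcl::PointCloud<pcl::PointXYZI> accumulated_map;

void on_scan_registered(const pcl::PointCloud<pcl::PointXYZI>& scan_world)
{
    accumulated_map += scan_world;   // pcl overloads += for concatenation
}

void on_shutdown()
{
    if (!accumulated_map.empty())
        pcl::io::savePCDFileBinary("accumulated_map.pcd", accumulated_map);
}
```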
My setup is simply an Ouster 32-beam LiDAR (with an integrated IMU) and a rigidly mounted global-shutter monochrome camera. The LiDAR, IMU and camera frames are all time-synchronized. Basically, I have the required hardware up and running to try out R3LIVE with a spinning LiDAR as soon as it is available. So when the code comes out, if some adaptation is needed, I would indeed be super interested to follow your work, test it and give feedback, and even contribute if it makes sense.
I gave you a follow to make sure not to miss your work 😄
Great! My code so far has removed the Livox driver dependency (simply because FAST-LIO2 already exists for that), but saving the accumulated .pcd is a really interesting line of research, especially compression techniques to save/load it efficiently. Great to hear you have made progress on that.
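As a cheap first step, PCL already ships an LZF-compressed binary writer (a minimal sketch, before looking at dedicated schemes like octree compression):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Writes the map as a binary-compressed .pcd, typically much smaller
// than the plain binary format at modest extra CPU cost.
void save_compressed(const pcl::PointCloud<pcl::PointXYZI>& map)
{
    pcl::io::savePCDFileBinaryCompressed("map_compressed.pcd", map);
}
```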
What you said about long tunnels is really interesting. I'm also very interested in degenerate scenarios and in modelling them. I have experimented with analyzing the eigenvectors of the H^T H matrix (where H is the Jacobian of the observation function h of the SLAM problem). Only updating the axes that are non-degenerate under the LiDAR observations and leaving the rest to the IMU is an interesting idea I want to pursue too.
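Concretely, the analysis I mean looks something like this (an untested Eigen sketch; the threshold is a made-up tuning knob):

```cpp
#include <Eigen/Dense>
#include <vector>

// Sketch: detect degenerate directions of the LiDAR update by looking at
// the spectrum of H^T * H, where H stacks the Jacobians of the point-to-plane
// residuals. Eigenvalues close to zero mean the observations do not
// constrain the corresponding eigenvector direction.
std::vector<Eigen::VectorXd> degenerate_directions(const Eigen::MatrixXd& H,
                                                   double eig_threshold)
{
    const Eigen::MatrixXd HtH = H.transpose() * H;
    // HtH is symmetric, so a self-adjoint solver is appropriate
    // (and it returns the eigenvalues sorted in increasing order).
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(HtH);

    std::vector<Eigen::VectorXd> weak;
    for (int i = 0; i < es.eigenvalues().size(); ++i)
        if (es.eigenvalues()(i) < eig_threshold)
            weak.push_back(es.eigenvectors().col(i));
    return weak;
}
```

The eigenvectors returned here would span exactly the directions the LiDAR update should not be trusted to correct.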
And HKUST != HKU! :smile: But from what I'm told, they are great friends. I'm here as a Visiting Intern; I'm originally from the Universitat Politècnica de Catalunya in Barcelona, where we have our car "Xaloc":
:rocket: Xaloc (CAT13d) official video (YouTube)
We use Velodyne, so having an open-source implementation supporting Ouster and HESAI would be really interesting. And adapting R3LIVE is definitely even more interesting, given how general it is.
The purpose of my code is to rewrite the front-end of FAST-LIO2 into a modular framework that is easier to modify and build upon, while still using the IKFoM and ikd-Tree libraries, since they are proven to work.
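For anyone following along, the ikd-Tree usage stays essentially what FAST-LIO2 does. Roughly (method names from hku-mars/ikd-Tree as I remember them; double-check against the header):

```cpp
#include <ikd-Tree/ikd_Tree.h>
#include <pcl/point_types.h>
#include <Eigen/Core>
#include <vector>

using PointType   = pcl::PointXYZINormal;
using PointVector = std::vector<PointType, Eigen::aligned_allocator<PointType>>;

KD_TREE<PointType> ikdtree;

void example(const PointVector& first_scan, const PointType& query)
{
    // Build the incremental k-d tree once, from the first registered scan.
    if (ikdtree.Root_Node == nullptr)
        ikdtree.Build(first_scan);

    // k-nearest-neighbour lookup feeding the point-to-plane residuals.
    PointVector nearest;
    std::vector<float> sq_dist;
    ikdtree.Nearest_Search(query, 5, nearest, sq_dist);
}
```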
Support for spinning LiDARs will be added in the future. For the sake of convenience, I am temporarily closing this issue since there has been no recent update; you can re-open it at any time.
Hey @jobesu14, I just released LIMO-Velo in alpha stage! I haven't tried Ouster yet, but it's supported.
Hi @Huguet57 and @ziv-lin,
Using the FAST-LIO handler as a starting point, I wrote a Velodyne handler for R3LIVE (a rough sketch of the conversion is below). It works perfectly with LiDAR + IMU.
However, as I explained in detail in this issue: https://github.com/hku-mars/r3live/issues/157, when I add a camera a significant drift appears. It would be great if you could look into it.
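For anyone attempting the same, the core of the handler is converting the Velodyne driver's per-point ring/time fields into the point type the LIO front-end expects. A stripped-down, untested sketch (field names follow the stock velodyne_pointcloud layout; the target type and the time-in-curvature trick mirror FAST-LIO-style front-ends, so verify against the actual R3LIVE input):

```cpp
#include <cstdint>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

// Point layout published by the stock velodyne_pointcloud driver.
struct VelodynePoint
{
    PCL_ADD_POINT4D;
    float intensity;
    float time;        // offset from scan start, seconds
    std::uint16_t ring;
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;

POINT_CLOUD_REGISTER_POINT_STRUCT(VelodynePoint,
    (float, x, x)(float, y, y)(float, z, z)
    (float, intensity, intensity)
    (float, time, time)
    (std::uint16_t, ring, ring))

// Sketch of the handler callback: convert the scan into the XYZINormal
// cloud used by FAST-LIO-style front-ends, stashing the per-point time
// offset (needed for motion deskewing) in the curvature field.
void velodyne_handler(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
    pcl::PointCloud<VelodynePoint> raw;
    pcl::fromROSMsg(*msg, raw);

    pcl::PointCloud<pcl::PointXYZINormal> out;
    out.reserve(raw.size());
    for (const auto& p : raw)
    {
        pcl::PointXYZINormal q;
        q.x = p.x; q.y = p.y; q.z = p.z;
        q.intensity = p.intensity;
        q.curvature = p.time * 1000.0f;  // ms, mirroring the Livox handler
        out.push_back(q);
    }
    // ... hand `out` to the LIO preprocessing / deskewing stage ...
}
```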
Great paper and videos 👏 Can't wait to try out the pipeline, especially to build textured meshes, loving it!
I was wondering if R3LIVE will have support for systems with a camera, an IMU and a spinning LiDAR (Ouster, Velodyne). I didn't find any clue about that in the paper...
The only clue I found is in this R2LIVE thread, where the author said that spinning LiDARs would be supported in the next iteration of the pipeline, which I assume is R3LIVE...
So, my question is whether such a camera + IMU + spinning LiDAR setup will be supported.
Once again, thank you so much for the contribution, great job 👏