York-SDCNLab / IILFM

This is a fiducial marker system designed for LiDAR sensors. Different visual fiducial marker systems (Apriltag, ArUco, CCTag, etc.) can be easily embedded, and its usage is as convenient as that of a visual fiducial marker. The system shows potential in SLAM, multi-sensor calibration, augmented reality, and so on.

Intensity Image-based LiDAR Fiducial Marker System


This work has been accepted by the IEEE Robotics and Automation Letters.

YouTube link to the introduction video: https://www.youtube.com/watch?v=AYBQHAEWBLM.
Bilibili link to the introduction video: https://www.bilibili.com/video/BV1s34y147UM/.

:tada::tada::tada:News!
:mega: The first mapping and localization framework based on LiDAR fiducial markers has been released here! Check out the instance reconstruction results below. The top row displays the ground truth on the left and ours on the right. The bottom row shows Livox Mapping on the left and LOAM Livox on the right.


:mega: Our new work Fiducial Tag Localization on a 3D LiDAR Prior Map has been released!

Background

Extensive research has been carried out on Visual Fiducial Marker (VFM) systems. However, no existing study utilizes these systems to their full potential in LiDAR applications. In this work, we develop an Intensity Image-based LiDAR Fiducial Marker (IILFM) system that fills the above-mentioned gap. The proposed system only requires an unstructured point cloud with intensity as the input, and it outputs the detected markers' information and the 6-DOF pose that describes the transformation from the world coordinate system to the LiDAR coordinate system. The IILFM system is as convenient to use as conventional VFM systems, with no restrictions on marker placement and shape. Different VFM systems, such as Apriltag 3, ArUco, and CCTag, can be easily embedded into the system. Hence, the proposed system inherits the functionality of the VFM systems, such as the coding and decoding methods.
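As a sketch of how such an output pose can be applied (purely illustrative: the rotation, translation, and vertex values below are made-up placeholders, not outputs of the actual system), a point expressed in the world coordinate system maps into the LiDAR coordinate system via p_L = R p_W + t:

```python
import numpy as np

# Hypothetical 6-DOF pose: rotation R and translation t that map
# world-frame coordinates into the LiDAR frame (p_L = R @ p_W + t).
theta = np.deg2rad(30.0)                       # made-up yaw angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.5, 0.2])                  # made-up translation (meters)

# A marker vertex defined in the world frame (as one would in config.yaml)
p_world = np.array([0.1, 0.1, 0.0])
p_lidar = R @ p_world + t                      # the same vertex in the LiDAR frame
```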

Marker Detection Demos

One- and two-marker detection:

demo1

Apriltag grid (35 markers) detection:

demo2 demo3

LiDAR Pose Estimation Demo

demo4

Other Applications

The proposed system shows potential in augmented reality, SLAM, multi-sensor calibration, etc. Here, an augmented reality demo using the proposed system is presented. The teapot point cloud is transformed to the location of the marker in the LiDAR point cloud based on the pose provided by the IILFM system.
demo5
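A minimal sketch of the step described above, assuming the marker pose is available as a 4x4 homogeneous matrix (the matrix and the "teapot" points below are made-up placeholders, not the actual demo data): every virtual point is mapped into the LiDAR frame before rendering.

```python
import numpy as np

# Made-up marker pose in the LiDAR frame as a 4x4 homogeneous transform.
T_marker = np.eye(4)
T_marker[:3, 3] = [2.0, -0.5, 0.3]             # place the marker 2 m ahead

# Placeholder "teapot": a few points around the virtual object's origin.
teapot = np.array([[0.0, 0.0, 0.0],
                   [0.1, 0.0, 0.0],
                   [0.0, 0.1, 0.05]])

# Append a homogeneous 1 to every point, transform, and drop it again.
teapot_h = np.hstack([teapot, np.ones((teapot.shape[0], 1))])   # N x 4
teapot_lidar = (T_marker @ teapot_h.T).T[:, :3]                 # N x 3, LiDAR frame
```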
In this repository, we only release the version in which the embedded system is Apriltag 3. The versions with the ArUco and CCTag detectors are coming soon. Replacing the embedded visual fiducial marker system is a very straightforward process, so, following the method introduced in our scripts, you may add any visual marker detector you like.

Requirements

Commands

git clone https://github.com/York-SDCNLab/IILFM.git
cd IILFM
catkin build

Modify the 'yorktag.launch' in ~/IILFM/src/yorkapriltag/launch according to your LiDAR model (e.g., rostopic, angular resolution, and so on) and the employed tag family. Then modify the 'config.yaml' in ~/IILFM/src/yorkapriltag/resources based on your setup (define the locations of the marker vertices with respect to the world coordinate system); otherwise, the output pose is meaningless. Afterward, run
source ./devel/setup.bash
roslaunch yorkapriltag yorktag.launch
Open a new terminal in ~/IILFM/src/yorkapriltag/resources and run
rosbag play -l bagname.bag
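For orientation, a vertex-location config in this kind of setup might look like the sketch below. This is purely illustrative: the key names and coordinates are assumptions, not the package's actual schema, so check the 'config.yaml' shipped in ~/IILFM/src/yorkapriltag/resources for the real format.

```yaml
# Hypothetical sketch of a vertex-location config; not the package's real schema.
# Each marker vertex is given in the world coordinate system (meters).
tag_family: tag36h11
markers:
  - id: 0
    vertices:            # four corners, counter-clockwise
      - [0.000, 0.000, 0.0]
      - [0.173, 0.000, 0.0]
      - [0.173, 0.173, 0.0]
      - [0.000, 0.173, 0.0]
```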

To view the 6-DOF pose, open a new terminal and run
rostopic echo /iilfm/pose
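The echoed pose consists of a position plus an orientation quaternion. To use it in your own code, the quaternion can be converted to a rotation matrix; a minimal numpy sketch follows (the example quaternion is made up, and the (x, y, z, w) ordering is an assumption based on the usual ROS convention):

```python
import numpy as np

def quat_to_matrix(x, y, z, w):
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    n = np.sqrt(x*x + y*y + z*z + w*w)
    x, y, z, w = x/n, y/n, z/n, w/n        # normalize defensively
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

# Example: a quaternion for a 90-degree rotation about the z-axis.
R = quat_to_matrix(0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4))
```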

To view the point cloud of the detected 3D fiducials in rviz, open a new terminal and run rviz. In rviz, change the 'Fixed Frame' to 'livox_frame', then click Add / By topic / iilfm / features / PointCloud2.

Experimental results:

Due to the page limit, we removed this large table from our manuscript submitted to RA-L and replaced it with a histogram. Considering that some readers might be interested in the ground truth, we present the table here. Please refer to our paper for the detailed experimental setup. table1

Citation

If you find this work helpful for your research, please cite our paper:

@ARTICLE{9774900,
  author={Liu, Yibo and Schofield, Hunter and Shan, Jinjun},
  journal={IEEE Robotics and Automation Letters}, 
  title={Intensity Image-Based LiDAR Fiducial Marker System}, 
  year={2022},
  volume={7},
  number={3},
  pages={6542-6549},
  doi={10.1109/LRA.2022.3174971}}