RoadLib

A lightweight library for instance-level visual road marking extraction, parameterization, mapping, map-aided localization, etc.

What is this?

This is an enhanced version of our work "Visual Mapping and Localization System Based on Compact Instance-Level Road Markings With Spatial Uncertainty" (RA-L 2022).



I have made practical modifications to the original version and hope this can serve as a reference for related research. A video preview is available here.

What is new? (compared to the original paper)

Update log

Installation

mkdir build
cd build
cmake ..
make -j8

Inference model (road marking segmentation)

We provide a pretrained PyTorch model for road marking segmentation. The model is based on the SegFormer implementation in MMSegmentation. We trained it on the ApolloScape dataset and our self-made dataset (around 500 images) collected in Wuhan City; it works reasonably well in the road environments of Wuhan.

To test the model, MMSegmentation is needed. After installation, put segformer_whu.py into the "configs/segformer" folder of the MMSegmentation project.

See

python scripts/inference_example.py

for details on running inference.

Note that this is just a toy model due to the limited training set. You may need to train your own road marking segmentation model to fit your application.
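For reference, below is a minimal sketch of querying such a model with MMSegmentation's Python API (assuming the 1.x `mmseg.apis` interface); the config path, checkpoint name, and input image are placeholders, and scripts/inference_example.py remains the authoritative example for this project.

```python
# Minimal sketch, assuming MMSegmentation 1.x is installed; the checkpoint
# and image paths below are placeholders to be adapted to your setup.
from mmseg.apis import init_model, inference_model

config_file = 'configs/segformer/segformer_whu.py'   # config provided by RoadLib
checkpoint_file = 'segformer_whu.pth'                 # hypothetical checkpoint path
image_file = 'test_image.jpg'                         # hypothetical input image

model = init_model(config_file, checkpoint_file, device='cuda:0')
result = inference_model(model, image_file)

# Predicted per-pixel road-marking labels as an HxW numpy array.
mask = result.pred_sem_seg.data.squeeze().cpu().numpy()
print(mask.shape, mask.max())
```

The ${DATASET}/semantic folder used by the demos below presumably contains segmentation results of this kind.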

Run the example

1. Mapping example

Download the test dataset we collected in Wuhan City here.

To run the mapping example, use the command below:

./build/demo_mapping ./config/WHU_0412/vi.yaml ${DATASET}/stamp.txt ${DATASET}/cam0 ${DATASET}/semantic ${DATASET}/gt.txt ${DATASET}/odo.txt ./map_output.bin

This demo performs incremental mapping and geo-registration sequentially. The main function (demo_mapping.cpp) is written in a simple, script-like manner; feel free to modify it.

The generated map is saved to a binary file. Use "scripts/view_map.py" for visualization.

2. Map-aided localization example

We provide a simple example of map-aided localization based on the pre-built map. Note that coarse matching (re-localization) is currently not provided. A meter-level initial guess of the vehicle pose is needed for the initial map matching; after that, global pose measurements are not necessary.

In this example, we use the same data sequence as in the mapping phase for map-aided localization, as a simple functionality test. To run the localization example, use the command below:

./build/demo_localization ./config/WHU_0412/vi_loc.yaml ${DATASET}/stamp.txt ${DATASET}/cam0 ${DATASET}/semantic ${DATASET}/gt.txt ${DATASET}/odo.txt ./map_output.bin ./localization_result.txt

Note that the map file "map_output.bin" needs to be pre-built (see the mapping example). The ground-truth file is needed to provide the initial guess (the prior pose estimate of the first epoch).

The localization result is saved to a text file. Use "scripts/evaluate_localization.py" to evaluate the accuracy.
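If you only want a quick sanity check outside the provided script, the sketch below compares the two trajectories directly; it assumes both localization_result.txt and gt.txt store one epoch per line as "timestamp x y z ..." in a common frame, which may not match the actual file layout, so prefer scripts/evaluate_localization.py for real evaluation.

```python
# Rough sanity-check sketch, assuming both files store "timestamp x y z ..."
# per line in the same frame (the real formats are defined by this project).
import numpy as np

def load_positions(path):
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1:4]          # timestamps, XYZ positions

t_est, p_est = load_positions('localization_result.txt')
t_gt, p_gt = load_positions('gt.txt')

# Match each estimated epoch to the ground-truth epoch closest in time.
idx = np.abs(t_gt[None, :] - t_est[:, None]).argmin(axis=1)
err = np.linalg.norm(p_est - p_gt[idx], axis=1)

print(f'mean 3D error: {err.mean():.3f} m, RMSE: {np.sqrt((err ** 2).mean()):.3f} m')
```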

Run on your own dataset

To run on your own dataset, the following data/metadata need to be prepared.



In all the tests, we assume the body frame to be right-forward-up and the camera frame to be right-down-forward. Modifications might be needed if you are using a different setup.
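As an illustration of these conventions, the sketch below builds the fixed rotation from a right-down-forward camera frame to a right-forward-up body frame (ignoring any lever arm and mounting misalignment, which in practice come from your extrinsic calibration):

```python
# Rotation from the camera frame (x right, y down, z forward) to the body
# frame (x right, y forward, z up). Rows express the body axes in terms of
# the camera axes. Illustration only; use your calibrated extrinsics.
import numpy as np

R_body_cam = np.array([
    [1.0,  0.0, 0.0],   # body x (right)   =  camera x (right)
    [0.0,  0.0, 1.0],   # body y (forward) =  camera z (forward)
    [0.0, -1.0, 0.0],   # body z (up)      = -camera y (down)
])

optical_axis_cam = np.array([0.0, 0.0, 1.0])   # camera looks along +z
print(R_body_cam @ optical_axis_cam)           # -> [0. 1. 0.], forward in the body frame
```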

About the viewer performance

The handcrafted legacy OpenGL viewer works fine on Windows, but its performance is very poor under WSL2. If you have any ideas or solutions, please contact me.

Limitations

An obvious limitation of the project is that it focuses only on road markings. We hope to support other roadside object instances (like poles and signs) in the future.

The codebase still has a lot of room for improvement. Feel free to discuss it with me!

Acknowledgement

RoadLib is developed by GREAT (GNSS+ REsearch, Application and Teaching) Group, School of Geodesy and Geomatics, Wuhan University.




We use the camodocal project to handle camera models, modified to a minimal version that does not require Ceres.

The codebase and documentation are licensed under the GNU General Public License v3 (GPL-3).