PyTorch (1.5.0) code for training and evaluating LSTR (Lane Shape Prediction with Transformers), along with pretrained models. We streamline lane detection into a single-stage framework by proposing a novel lane shape model that achieves 96.18% accuracy on TuSimple.
For details, see End-to-end Lane Shape Prediction with Transformers by Ruijin Liu, Zejian Yuan, Tie Liu, Zhiliang Xiong.
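LSTR describes each lane with a handful of curve parameters that the transformer regresses directly, instead of per-pixel segmentation. As a rough illustration of that idea, the sketch below samples a lane polyline from a generic cubic in the image plane; this is not the paper's exact road-projection curve, and all names and values are illustrative:

```python
import numpy as np

def sample_lane_points(params, y_samples):
    """Illustrative only: evaluate a generic cubic x = a*y^3 + b*y^2 + c*y + d.

    LSTR's actual curve is derived by projecting a cubic road curve into the
    image; this sketch just shows how a small predicted parameter vector
    becomes a lane polyline.
    """
    a, b, c, d = params
    xs = a * y_samples ** 3 + b * y_samples ** 2 + c * y_samples + d
    return np.stack([xs, y_samples], axis=1)  # (N, 2) points, (x, y) per row

# Hypothetical parameters a prediction head might output for one lane.
ys = np.linspace(0.4, 1.0, 20)  # normalized image rows, top to bottom
print(sample_lane_points([0.1, -0.2, 0.5, 0.3], ys).shape)  # (20, 2)
```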
【2021/12/03】:fire: Our new work Learning to Predict 3D Lane Shape and Camera Pose from a Single Image via Geometry Constraints by Ruijin Liu, Dapeng Chen, Tie Liu, Zhiliang Xiong, Zejian Yuan has been accepted by AAAI 2022! The preprint paper and code will be released soon!
【2021/11/23】 We now support training and testing on custom data. Tutorial: Train and Test Your Custom Data.
【2021/11/16】 We fixed multi-GPU training.
【2020/12/06】 We now support the CULane dataset.
We provide the baseline LSTR model file (trained on the TuSimple train and val sets for 500,000 iterations) at ./cache/nnet/LSTR/LSTR_500000.pkl (~3.1 MB).
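To peek inside the checkpoint, here is a minimal sketch; it assumes the .pkl is a torch.save'd parameter dictionary, which may differ from how the repo's nnet wrapper actually stores it:

```python
import torch

# Assumption: the checkpoint is a plain parameter dict written by torch.save;
# adjust if the repo's nnet code stores it in another layout.
params = torch.load("./cache/nnet/LSTR/LSTR_500000.pkl", map_location="cpu")
for name, tensor in list(params.items())[:5]:
    print(name, tuple(tensor.shape))
```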
Download and extract the TuSimple train, val, and test sets with annotations from TuSimple. We expect the following directory structure:
```
TuSimple/
    LaneDetection/
        clips/
        label_data_0313.json
        label_data_0531.json
        label_data_0601.json
        test_label.json
    LSTR/
```
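Before training, you can sanity-check the layout with a small sketch like this (it assumes the TuSimple/ tree above sits in the current working directory):

```python
import os

ROOT = "TuSimple/LaneDetection"  # adjust if your dataset lives elsewhere
expected = [
    "clips",
    "label_data_0313.json",
    "label_data_0531.json",
    "label_data_0601.json",
    "test_label.json",
]
for name in expected:
    path = os.path.join(ROOT, name)
    print("ok  " if os.path.exists(path) else "MISSING", path)
```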
Create the conda environment:
```
conda env create --name lstr --file environment.txt
```
After you create the environment, activate it:
```
conda activate lstr
```
Then install the remaining dependencies:
```
pip install -r requirements.txt
```
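A quick check that the environment matches what this repo targets:

```python
import torch

print(torch.__version__)          # this repo targets PyTorch 1.5.0
print(torch.cuda.is_available())  # the training/eval commands assume a CUDA GPU
```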
To train a model (to train on only the train set, set "train_split": "train" in ./config/LSTR.json):
```
python train.py LSTR
```
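If you would rather flip the split programmatically than edit the file by hand, here is a small sketch; it assumes "train_split" is a top-level key in ./config/LSTR.json, so adjust the key path if the config nests it:

```python
import json

with open("./config/LSTR.json") as f:
    cfg = json.load(f)

cfg["train_split"] = "train"  # the provided baseline was trained on train+val

with open("./config/LSTR.json", "w") as f:
    json.dump(cfg, f, indent=4)
```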
To resume training from a snapshot model file:
```
python train.py LSTR --iter 500000
```
To evaluate and reproduce the paper's result (about 603 MiB of GPU memory when evaluating images one at a time):
```
python test.py LSTR --testiter 500000 --modality eval --split testing
```
To evaluate FPS (increase --batch to maximize FPS; about 877 MiB of GPU memory when each image is repeated 16 times):
```
python test.py LSTR --testiter 500000 --modality eval --split testing --batch 16
```
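For intuition about what the FPS number measures, here is a back-of-the-envelope timing sketch; the stand-in conv net and the 360x640 input resolution are assumptions, not the repo's actual test harness:

```python
import time
import torch

# Stand-in model; requires a CUDA GPU, like the command above.
model = torch.nn.Conv2d(3, 8, 3, padding=1).cuda().eval()
dummy = torch.randn(16, 3, 360, 640, device="cuda")  # batch of 16, as with --batch 16

with torch.no_grad():
    for _ in range(10):  # warm-up
        model(dummy)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        model(dummy)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print("images/sec:", 16 * 100 / elapsed)
```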
To evaluate and save detected images to ./results/LSTR/500000/testing/lane_debug:
```
python test.py LSTR --testiter 500000 --modality eval --split testing --debug
```
To evaluate and save decoder attention maps (pass --debugEnc to visualize encoder attention maps):
```
python test.py LSTR --testiter 500000 --modality eval --split testing --debug --debugDec
```
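The saved maps are heat-map overlays; here is a minimal sketch of the same kind of visualization, with stand-in arrays rather than the repo's actual outputs:

```python
import matplotlib.pyplot as plt
import numpy as np

frame = np.random.rand(360, 640, 3)      # stand-in for an input frame
attn = np.random.rand(12, 20)            # stand-in low-resolution attention map
attn = np.kron(attn, np.ones((30, 32)))  # nearest-neighbor upsample to 360x640

plt.imshow(frame)
plt.imshow(attn, cmap="jet", alpha=0.4)  # translucent heat map on top
plt.axis("off")
plt.savefig("attention_overlay.png", bbox_inches="tight")
```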
To evaluate on a set of images (put your images in ./images; the detected results will be saved in ./detections):
```
python test.py LSTR --testiter 500000 --modality images --image_root ./ --debug
```
If you use LSTR in your research, please cite:
```
@InProceedings{LSTR,
    author    = {Ruijin Liu and Zejian Yuan and Tie Liu and Zhiliang Xiong},
    title     = {End-to-end Lane Shape Prediction with Transformers},
    booktitle = {WACV},
    year      = {2021}
}
```
LSTR is released under the BSD 3-Clause License. Please see the LICENSE file for more information.
We actively welcome your pull requests!