chequanghuy / TwinLiteNet

MIT License

TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars

Requirements

See requirements.txt for the full list of dependencies and their version requirements.

pip install -r requirements.txt
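
After installation, a quick sanity check can confirm that PyTorch (assumed here to be one of the pinned dependencies) is importable and that a CUDA device is visible:

```python
# Quick environment check; assumes PyTorch is among the pinned dependencies.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```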

Data Preparation

/data
    bdd100k
        images
            train/
            val/
            test/
        segments
            train/
            val/
        lane
            train/
            val/
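
Before training, it can be helpful to confirm the tree above is in place. The sketch below is only a convenience check; the /data root is an assumption, so point it at wherever your BDD100K copy lives:

```python
# Convenience check for the directory layout above; DATA_ROOT is an assumption.
from pathlib import Path

DATA_ROOT = Path("/data/bdd100k")
EXPECTED = [
    "images/train", "images/val", "images/test",
    "segments/train", "segments/val",
    "lane/train", "lane/val",
]

missing = [d for d in EXPECTED if not (DATA_ROOT / d).is_dir()]
if missing:
    print("Missing folders:", ", ".join(missing))
else:
    print("BDD100K layout looks good.")
```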

Pipeline

Train

python3 main.py

Test

python3 val.py

Inference

Images

python3 test_image.py
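
For reference, single-image inference conceptually follows the pattern below. This is a hedged sketch, not the script itself: the module path model.TwinLite, the class name TwinLiteNet, the checkpoint path pretrained/best.pth, the 640 x 360 input size, and the image path are all assumptions, so consult test_image.py for the exact API.

```python
# Conceptual sketch of single-image inference; module, class, checkpoint, and
# input-size choices below are assumptions -- see test_image.py for the real API.
import cv2
import torch

from model import TwinLite as net              # assumed module layout

model = net.TwinLiteNet()                      # assumed class name
model.load_state_dict(torch.load("pretrained/best.pth", map_location="cpu"))
model.eval()

img = cv2.imread("images/example.jpg")         # hypothetical input image
img = cv2.resize(img, (640, 360))              # (width, height); assumed input size
x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    da_out, ll_out = model(x)                  # drivable-area and lane-line outputs

da_mask = torch.argmax(da_out, dim=1).squeeze(0).numpy()
ll_mask = torch.argmax(ll_out, dim=1).squeeze(0).numpy()
```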

Visualize

Drivable area segmentation

Lane Detection

Acknowledgement

Our source code is inspired by:

Citation

If you find our paper and code useful for your research, please consider giving us a star :star: and a citation :pencil::

@INPROCEEDINGS{10288646,
  author={Che, Quang-Huy and Nguyen, Dinh-Phuc and Pham, Minh-Quan and Lam, Duc-Khai},
  booktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)}, 
  title={TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars}, 
  year={2023},
  volume={},
  number={},
  pages={1-6},
  doi={10.1109/MAPR59823.2023.10288646}}

TwinLiteNetV2: A small stone can kill a giant

🚀 Coming soon!


| Model | Size (Height x Width) | Lane Accuracy | Lane IoU | Drivable Area mIoU | Params (M) | FLOPs (B) |
|---|---|---|---|---|---|---|
| [TwinLiteNetV2-Nano]() | 384 x 640 | 70.8 | 23.6 | 87.2 | 0.03 | 0.485 |
| [TwinLiteNetV2-Small]() | 384 x 640 | 75.9 | 28.7 | 90.4 | 0.14 | 1.366 |
| [TwinLiteNetV2-Medium]() | 384 x 640 | 79.3 | 32.6 | 92.3 | 0.62 | 5.088 |
| [TwinLiteNetV2-Large]() | 384 x 640 | 81.7 | 34.2 | 92.9 | 2.78 | 21.526 |
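
Parameter and FLOP figures like those above are commonly measured with a profiler such as the third-party thop package. Below is a minimal sketch using a stand-in model, since the V2 code is not yet public; replace the stand-in with the actual variant once it is released.

```python
# Sketch of how parameter/FLOP counts can be measured with thop (pip install thop).
import torch
import torch.nn as nn
from thop import profile

# Stand-in model so the snippet runs on its own; replace with the
# TwinLiteNetV2 variant you want to profile once the code is released.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))

dummy = torch.randn(1, 3, 384, 640)    # 3-channel input at the 384 x 640 size from the table
macs, params = profile(model, inputs=(dummy,))

# thop counts multiply-accumulate operations, which are commonly quoted as FLOPs.
print(f"FLOPs: {macs / 1e9:.3f} B   Params: {params / 1e6:.2f} M")
```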