
VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition

International Conference on Computer Vision (ICCV) 2017

In this paper, we propose a unified end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions. We tackle rainy and low-illumination conditions, which have not been extensively studied until now due to their inherent challenges. For example, images taken on rainy days are subject to low illumination, while wet roads cause light reflection and distort the appearance of lane and road markings. At night, color distortion occurs under limited illumination. As a result, no benchmark dataset exists and only a few algorithms work under poor weather conditions. To address this shortcoming, we build a lane and road marking benchmark which consists of about 20,000 images with 17 lane and road marking classes under four different scenarios: no rain, rain, heavy rain, and night. We train and evaluate several versions of the proposed multi-task network and validate the importance of each task. The resulting approach, VPGNet, can detect and classify lanes and road markings, and predict a vanishing point with a single forward pass. Experimental results show that our approach achieves high accuracy and robustness under various conditions in real time (20 fps).

Supplementary Video

Citation
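
If you use our code or dataset, please cite our paper:

    @InProceedings{Lee_2017_ICCV,
      author = {Lee, Seokju and Kim, Junsik and Shin Yoon, Jae and Shin, Seunghak and Bailo, Oleksandr and Kim, Namil and Lee, Tae-Hee and Seok Hong, Hyun and Han, Seung-Hoon and So Kweon, In},
      title = {VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition},
      booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
      month = {Oct},
      year = {2017}
    }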

Baseline Usage

  1. Clone the repository

    git clone https://github.com/SeokjuLee/VPGNet.git
  2. Prepare the dataset from the Caltech Lanes Dataset.

    • Download the Caltech Lanes Dataset.
    • Organize the file structure as below.
      |__ VPGNet
          |__ caffe
          |__ caltech-lanes-dataset
              |__ caltech-lane-detection/matlab
              |__ cordova1
              |__ cordova2
              |__ washington1
              |__ washington2
              |__ vpg_annot_v1.m
    • Generate list files using caltech-lanes-dataset/vpg_annot_v1.m. Arrange training and validation sets as you wish; a minimal split sketch follows below.
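    • For example, a split could be scripted in Python as below. This is only a sketch: the input file name caltech_list.txt is hypothetical, so point it at whatever list file vpg_annot_v1.m actually produced.
      import random

      # Read all annotated samples from the generated list file
      # ('caltech_list.txt' is an assumed name).
      with open('caltech-lanes-dataset/caltech_list.txt') as f:
          samples = f.readlines()

      # Shuffle with a fixed seed so the split is reproducible.
      random.seed(0)
      random.shuffle(samples)

      # Keep 90% for training, 10% for validation.
      split = int(0.9 * len(samples))
      with open('train_list.txt', 'w') as f:
          f.writelines(samples[:split])
      with open('val_list.txt', 'w') as f:
          f.writelines(samples[split:])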
  3. Caffe compilation

    • Compile our Caffe code following the official installation instructions (http://caffe.berkeleyvision.org/installation.html).
    • Go to our workspace.
      cd caffe/models/vpgnet-novp
  4. Make LMDB

    • Change the paths in make_lmdb.sh and run it. The LMDB files will be created; see the sanity-check sketch below.
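    • As a quick sanity check, the Python sketch below counts the entries in one of the generated LMDBs using the py-lmdb package. The path LMDB_train is an assumption; use the output path set in make_lmdb.sh.
      import lmdb

      # Open the LMDB created by make_lmdb.sh (read-only, no lock needed).
      # 'LMDB_train' is an assumed output path; use the one from the script.
      env = lmdb.open('LMDB_train', readonly=True, lock=False)
      with env.begin() as txn:
          # Each entry holds one serialized Caffe Datum (data + label).
          print('number of entries:', txn.stat()['entries'])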
  5. Training

    • Run train.sh. Training can also be driven from Python, as sketched below.
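    • A minimal pycaffe sketch, assuming the solver definition in this directory is named solver.prototxt (check train.sh for the actual file name and GPU settings):
      import caffe

      # Train on GPU 0; CPU training is impractical for this network.
      caffe.set_mode_gpu()
      caffe.set_device(0)

      # 'solver.prototxt' is an assumed name; train.sh points at the real file.
      solver = caffe.SGDSolver('solver.prototxt')
      solver.solve()  # run the full training loop defined by the solver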

VPGNet Dataset

  1. Download

    • If you would like to download the VPGNet dataset, please fill out a survey. We will send you an e-mail with a download link.
  2. Dataset overview

    • We categorize the scenes according to the time of day and weather conditions (please refer to our paper).
      scene_1: daytime, no rain
      scene_2: daytime, rain
      scene_3: daytime, heavy rain
      scene_4: night
    • File structure
      |__ VPGNet-DB-5ch
          |__ scene_1
              |__ $TIMESTAMP
                  |__ $FRAMEIDX.mat
          |__ scene_2
          |__ scene_3
          |__ scene_4
  3. Formatting

    • We pack a 640x480 RGB image (3 channels), a segmentation label (1 channel), and a vanishing point label (1 channel) into a single 5-channel blob, stored as a $FRAMEIDX.mat (MATLAB) file.
    • For class labels, please refer to vpgnet-labels.
  4. Visualization
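    • A minimal Python sketch for loading and displaying one frame is shown below. The variable name rgb_seg_vp inside the .mat file is an assumption; confirm the actual key with scipy.io.whosmat.
      import scipy.io as sio
      import matplotlib.pyplot as plt

      # Load one frame; substitute real timestamp/frame names for the placeholders.
      # The key 'rgb_seg_vp' is an assumption -- check with sio.whosmat(path).
      path = 'VPGNet-DB-5ch/scene_1/$TIMESTAMP/$FRAMEIDX.mat'
      blob = sio.loadmat(path)['rgb_seg_vp']

      rgb = blob[:, :, :3].astype('uint8')  # 3-channel RGB image
      seg = blob[:, :, 3]                   # segmentation label map
      vp = blob[:, :, 4]                    # vanishing point label map

      plt.subplot(1, 3, 1); plt.imshow(rgb); plt.title('rgb'); plt.axis('off')
      plt.subplot(1, 3, 2); plt.imshow(seg, cmap='jet'); plt.title('seg'); plt.axis('off')
      plt.subplot(1, 3, 3); plt.imshow(vp, cmap='jet'); plt.title('vp'); plt.axis('off')
      plt.show()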