wvangansbeke / LaneDetection_End2End

End-to-end Lane Detection for Self-Driving Cars (ICCV 2019 Workshop)
https://arxiv.org/pdf/1902.00293.pdf

ENet-Label-Torch is available now (a light-weight and effective lane detection model) #32

Closed cardwing closed 5 years ago

cardwing commented 5 years ago

Our ENet-Label-Torch has been released. More details can be found in my repo.

Key features:

(1) ENet-Label is a light-weight lane detection model based on ENet that adopts self-attention distillation (more details can be found in our paper, which will be published soon).

(2) It has 20× fewer parameters and runs 10× faster than the state-of-the-art SCNN, and achieves 72.0 F1-measure on the CULane testing set (better than SCNN, which achieves 71.6).

(Do not hesitate to try our model!)
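The self-attention distillation idea can be sketched as follows: an attention map is derived from a layer's activations (for example, the channel-wise sum of squares, normalized), and a shallower layer is trained to mimic the attention map of a deeper one. This is a minimal NumPy sketch of that general technique; the function names are mine, and the actual ENet-Label implementation may differ in the details.

```python
import numpy as np

def attention_map(activations):
    """Collapse a (C, H, W) activation tensor into a normalized
    (H, W) attention map: sum of squares over channels,
    then L2-normalize the flattened map."""
    amap = np.sum(activations ** 2, axis=0)
    norm = np.linalg.norm(amap.reshape(-1))
    return amap / (norm + 1e-12)

def distillation_loss(student_act, teacher_act):
    """MSE between the attention maps of a shallower (student) and
    a deeper (teacher) layer; assumes matching spatial sizes."""
    diff = attention_map(student_act) - attention_map(teacher_act)
    return float(np.mean(diff ** 2))
```

The loss is added to the usual segmentation objective, so the shallow layers get an extra training signal without any extra labels or inference-time cost.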

Performance on CULane testing set (F1-measure; Crossroad reports false positives):

| Category | SCNN-Torch | SCNN-Tensorflow | ENet-Label-Torch |
| --- | --- | --- | --- |
| Normal | 90.6 | 90.2 | 90.7 |
| Crowded | 69.7 | 71.9 | 70.8 |
| Night | 66.1 | 64.6 | 65.9 |
| No line | 43.4 | 45.8 | 44.7 |
| Shadow | 66.9 | 73.8 | 70.6 |
| Arrow | 84.1 | 83.8 | 85.8 |
| Dazzle light | 58.5 | 59.5 | 64.4 |
| Curve | 64.4 | 63.4 | 65.4 |
| Crossroad | 1990 | 4137 | 2729 |
| Total | 71.6 | 71.3 | 72.0 |
| Runtime (ms) | 133.5 | -- | 13.4 |
| Parameters (M) | 20.72 | -- | 0.98 |
ConerK commented 5 years ago

Compared with LaneDetection_End2End, which one is faster and more accurate?

cardwing commented 5 years ago

@ConerK Since SCNN is the best algorithm on TuSimple and ENet-Label-Torch outperforms SCNN on TuSimple, I think ENet-Label-Torch should outperform LaneDetection_End2End in terms of accuracy. As for speed, you can run a test yourself.

ConerK commented 5 years ago

@cardwing OK, thanks.

wvangansbeke commented 5 years ago

@cardwing,

Thanks for letting me know. I will check it out.

Best, Wouter

Msabih commented 5 years ago

@cardwing

Could you please share the accuracy score you obtained on TuSimple, and whether you used only TuSimple for training?

cardwing commented 5 years ago

@Msabih ENet-Label achieves 96.64% on the TuSimple testing set, which outperforms SCNN (96.53%).

Msabih commented 5 years ago

@cardwing Thanks for the reply. Just one more question. The TuSimple benchmark provides y samples and lane points with respect to an image size of 720 × 1280. Did you train and test your model at 720 × 1280?

If you resized the images to a lower resolution, then you either have to scale the predictions back up to 720 × 1280, or you could downsample the TuSimple benchmark points for evaluation. Which is your approach, and which do you think is better?

cardwing commented 5 years ago

I resize the input image to 368 × 640, following the data processing of SCNN. However, a better solution is to crop away the areas that contain no lanes, i.e., remove the upper part of the image; in that case you can feed the full (cropped) image to the network.
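On the rescaling question: if the network runs at 368 × 640, one option is to map the predicted lane points back to the original 720 × 1280 frame before running the TuSimple evaluation. A minimal sketch of that mapping (the function name and coordinate convention are my own, not taken from the repo):

```python
def scale_lane_points(points, in_size=(368, 640), out_size=(720, 1280)):
    """Map (x, y) lane points predicted at the network input
    resolution back to the original TuSimple frame.

    points: list of (x, y) tuples in the in_size frame.
    in_size, out_size: (height, width) pairs.
    """
    sy = out_size[0] / in_size[0]  # vertical scale factor
    sx = out_size[1] / in_size[1]  # horizontal scale factor
    return [(x * sx, y * sy) for (x, y) in points]
```

For example, the center pixel (320, 184) of a 368 × 640 prediction maps to (640, 360) in the 720 × 1280 frame. If the image was cropped before resizing, the crop offset has to be added back after scaling.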