XingangPan / SCNN

Spatial CNN for traffic lane detection (AAAI2018)
MIT License

Tensorflow version of SCNN is available now #61

Closed cardwing closed 5 years ago

cardwing commented 5 years ago

I have implemented SCNN using Tensorflow and put the full code here. You can test it on popular lane detection benchmarks like TuSimple, CULane and BDD100K, or on your custom dataset with minor modifications. Feel free to raise issues if you cannot reproduce the results. @ytzhao, @ding-hai-tao, @ycszen, @FrancisGee, @hewumars

cardwing commented 5 years ago

My code is based on this repo. Thanks for sharing your code. @XingangPan

yinhai86924 commented 5 years ago

Is the Tensorflow version of SCNN based on the LaneNet approach, or on the earlier Spatial CNN for Traffic Lane Detection? I downloaded the Tensorflow version of SCNN code, and it feels very similar to the LaneNet detection method.

cardwing commented 5 years ago

@yinhai86924, the Tensorflow version of SCNN was adapted from the LaneNet codebase. The main differences are: 1. a message-passing module is added to the network, and spatial upsampling replaces the FCN-based decoder; 2. there are multiple outputs, including a multi-channel probability map and a binary existence prediction for each lane; 3. the probability maps are saved following the SCNN directory layout, which makes the subsequent testing step easier.
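
For readers unfamiliar with the message-passing step, here is a minimal sketch of the downward pass in TensorFlow 2.x. It illustrates the idea rather than reproducing the repo's code: the layer name, kernel width, and channel count are assumptions, and the full SCNN module also runs upward, leftward, and rightward passes.

```python
import tensorflow as tf

class MessagePassingDown(tf.keras.layers.Layer):
    """Sketch of SCNN-style downward message passing: each row of the
    feature map receives a convolved, ReLU-activated message from the
    row above it, and rows are updated sequentially from top to bottom."""

    def __init__(self, channels=128, kernel_width=9):
        super().__init__()
        # A 1 x k conv applied to one row at a time; weights are shared
        # across rows. `channels` must match the input channel count so
        # the residual add below is valid (128 is an assumed value).
        self.conv = tf.keras.layers.Conv2D(
            channels, (1, kernel_width), padding="same", use_bias=False)

    def call(self, x):
        # x: [batch, H, W, C]; H must be statically known for tf.unstack.
        rows = tf.unstack(x, axis=1)             # H tensors of [batch, W, C]
        out = [rows[0]]
        for row in rows[1:]:
            prev = tf.expand_dims(out[-1], 1)    # [batch, 1, W, C]
            msg = tf.nn.relu(self.conv(prev))[:, 0]
            out.append(row + msg)                # residual update of this row
        return tf.stack(out, axis=1)

# Example: feats = MessagePassingDown()(tf.random.normal([2, 36, 100, 128]))
```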

zimurui commented 5 years ago

Hi. I copied your full SCNN tensorflow code and the CULane dataset, but the loss is very small from epoch 1 (shown below). However, the model runs OK at test time, so I'm confused: is the model actually being trained by your repo code? The lane mask covers only a small part of each image, so I doubt that plain softmax works well in this situation.

Epoch: 1 loss_ins= 0.637915 (0.637915) loss_ext= 0.696006 (0.696006) accuracy= 0.178814 (0.178814) accuracy_back= 0.235519 (0.235519) mean_time= 50.792227s
Epoch: 2 loss_ins= 0.341373 (0.341373) loss_ext= 0.670495 (0.670495) accuracy= 0.021013 (0.021013) accuracy_back= 0.889111 (0.889111) mean_time= 0.332169s
Epoch: 3 loss_ins= 0.167313 (0.254343) loss_ext= 0.682850 (0.676673) accuracy= 0.000000 (0.010506) accuracy_back= 0.998962 (0.944036) mean_time= 0.328959s
Epoch: 4 loss_ins= 0.165585 (0.224757) loss_ext= 0.794752 (0.716033) accuracy= 0.000000 (0.007004) accuracy_back= 0.999992 (0.962688) mean_time= 0.335514s
Epoch: 5 loss_ins= 0.320033 (0.248576) loss_ext= 0.667665 (0.703941) accuracy= 0.000000 (0.005253) accuracy_back= 1.000000 (0.972016) mean_time= 0.335776s
Epoch: 6 loss_ins= 0.311843 (0.261229) loss_ext= 0.636154 (0.690383) accuracy= 0.000000 (0.004203) accuracy_back= 1.000000 (0.977613) mean_time= 0.335177s
Epoch: 7 loss_ins= 0.259280 (0.260904) loss_ext= 0.627233 (0.679858) accuracy= 0.000000 (0.003502) accuracy_back= 1.000000 (0.981344) mean_time= 0.334006s
Epoch: 8 loss_ins= 0.173998 (0.248489) loss_ext= 0.688290 (0.681063) accuracy= 0.000000 (0.003002) accuracy_back= 1.000000 (0.984009) mean_time= 0.334256s
Epoch: 9 loss_ins= 0.178677 (0.239763) loss_ext= 0.516212 (0.660456) accuracy= 0.000000 (0.002627) accuracy_back= 1.000000 (0.986008) mean_time= 0.332782s

cardwing commented 5 years ago

The loss is actually still relatively large: by the end of training, the segmentation loss should drop to around 0.01 ~ 0.03. Just train the model for more epochs.
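
On the class-imbalance question above: because background pixels dominate lane images, a plain cross-entropy can be pushed down by predicting background everywhere, which is consistent with accuracy_back climbing to 1.0 in the log. One common remedy is to down-weight the background class in the loss; the sketch below shows the idea, with the function name and weight values chosen for illustration (they are not taken from this repo).

```python
import tensorflow as tf

def weighted_seg_loss(logits, labels, bg_weight=0.4, lane_weight=1.0):
    """Cross-entropy that down-weights the dominant background class
    (label 0), so the loss cannot be minimized by predicting background
    for every pixel. Weight values here are illustrative assumptions."""
    # logits: [batch, H, W, num_classes]; labels: [batch, H, W] integer maps
    per_pixel = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)            # [batch, H, W]
    weights = tf.where(tf.equal(labels, 0),
                       tf.fill(tf.shape(labels), bg_weight),
                       tf.fill(tf.shape(labels), lane_weight))
    return tf.reduce_mean(per_pixel * weights)

# Example: loss = weighted_seg_loss(model_logits, gt_labels)
```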