Thank you so much for the reminder; I have corrected it. However, there is nothing wrong with the total F1 measure of 73.9 for Better-CycleGAN + ERFNet.
Our paper offers a new, efficient data enhancement method for lane detection. As you can see, our method improves the environmental adaptability of the lane detector. However, it is possible to get better results depending on the selected images and the chosen image translation method. In my opinion, the quality of the generated images is also important to ensure their usefulness on other datasets.
Perhaps you could share the selected image list or the image translation method you chose here. Thank you!
I totally agree with your opinion.
For image translation, I used the original CycleGAN but with a higher-resolution image input. While it accepts any input size, it is not clear to me how exactly you modified the generator network structure.
My image list is nothing fancy. I just evenly selected 13,000 of the daytime images from the CULane training set.
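For reference, a minimal sketch of that kind of even sampling, assuming a hypothetical text file `day_train_list.txt` that lists the daytime frames of the CULane training set, one image path per line (the file name, output name, and count are placeholders to adjust as needed):

```python
# Evenly sample 13,000 frames from a pre-filtered list of CULane daytime training images.
# "day_train_list.txt" and "cyclegan_day_subset.txt" are hypothetical file names.

def evenly_sample(paths, k):
    """Pick k items spread evenly across the list."""
    step = len(paths) / k
    return [paths[int(i * step)] for i in range(k)]

with open("day_train_list.txt") as f:
    day_images = [line.strip() for line in f if line.strip()]

selected = evenly_sample(day_images, 13000)

with open("cyclegan_day_subset.txt", "w") as f:
    f.write("\n".join(selected))
```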
Thanks for your agreement and for following our paper! High resolution is important for style-transfer-based data enhancement. As for the concrete modifications to the generator, Chapter 3 of our paper gives a detailed introduction. Feel free to keep in touch if you have any further questions.
For the Curve category, the F1 measure of ERFNet should be 66.3 instead of 71.6: https://github.com/cardwing/Codes-for-Lane-Detection
I ran the evaluation code on your trained model and obtained exactly the same metrics, except for the F1 measure of the Curve category: on my side it is 67.1, different from the 72.4 reported in your table. Please check that.
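For clarity, the reported number is an F1 score in the usual sense. A minimal sketch of the final formula, given per-category true-positive, false-positive, and false-negative lane counts (the official CULane tool matches predicted and ground-truth lanes by IoU first; only the formula is illustrated here, and the counts below are made up):

```python
def f1_measure(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative lane counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with hypothetical counts for a single category:
print(f1_measure(tp=900, fp=350, fn=500))
```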
By the way, I was able to achieve even better results following your paper, although my generated fake night images do not look as good as yours. I guess it also depends on the data selection for image translation. Nevertheless, I look forward to your releasing the source code for Better-CycleGAN.
Thanks!