lucastabelini / LaneATT

Code for the paper entitled "Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection" (CVPR 2021)
https://openaccess.thecvf.com/content/CVPR2021/html/Tabelini_Keep_Your_Eyes_on_the_Lane_Real-Time_Attention-Guided_Lane_Detection_CVPR_2021_paper.html
MIT License

Questions about anchor masks #45

Closed: BatmanofZuhandArrgh closed this issue 3 years ago

BatmanofZuhandArrgh commented 3 years ago

I'm retraining a model on a combination of CULane, TuSimple, LLAMAS, and another dataset. Since you computed the anchor frequencies of the dataset before training, keeping only 1000 anchors for every training and inference run, should I do the same for a much larger dataset with a wider variety of lane positions? Since you hardcoded keeping only 1000 anchors, inference came into conflict with the attention module when I kept all anchors. Can you confirm that this is an issue and that I should modify the attention module if I want to keep all anchors?
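
To illustrate what I mean by the conflict: the attention module projects each anchor's features to scores over the other N - 1 anchors, so its layer shape is tied to the anchor count used at training time. A rough sketch of that shape dependence (layer and dimension names here are illustrative, not the actual LaneATT code):

```python
import torch
import torch.nn as nn

class AnchorAttention(nn.Module):
    """Illustrative: each anchor attends to the other N - 1 anchors."""

    def __init__(self, n_anchors: int, feat_dim: int):
        super().__init__()
        # The output size is tied to the anchor count, so a checkpoint
        # trained with 1000 anchors cannot be run with a different N.
        self.score = nn.Linear(feat_dim, n_anchors - 1)

    def forward(self, anchor_feats: torch.Tensor) -> torch.Tensor:
        # anchor_feats: (batch, n_anchors, feat_dim)
        scores = self.score(anchor_feats)      # (batch, N, N - 1)
        return torch.softmax(scores, dim=-1)   # attention weights

attn = AnchorAttention(n_anchors=1000, feat_dim=64)
feats = torch.randn(1, 1000, 64)
weights = attn(feats)  # shapes only line up for the N used at training
```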

Also, I've tested the CULane model on TuSimple and vice versa, and got really bad results: an F1 between 0 and 0.2. Can someone else confirm this?

lucastabelini commented 3 years ago

> I'm retraining a model on a combination of CULane, TuSimple, LLAMAS, and another dataset. Since you computed the anchor frequencies of the dataset before training, keeping only 1000 anchors for every training and inference run, should I do the same for a much larger dataset with a wider variety of lane positions? Since you hardcoded keeping only 1000 anchors, inference came into conflict with the attention module when I kept all anchors. Can you confirm that this is an issue and that I should modify the attention module if I want to keep all anchors?

I am not sure what you're referring to. The value is not hardcoded; it is set in the config file here. If you train with N anchors, you have to run inference with N anchors. If you want to try the model on a new dataset, you will have to compute the anchor frequencies for that dataset.
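
For a new dataset, the frequency computation amounts to counting, for each anchor, how often it is the best match to a ground-truth lane in the training set, then keeping the top k. A minimal sketch of that idea (function names and the matching criterion are assumptions, not the repository's actual implementation):

```python
import numpy as np

def compute_anchor_frequencies(anchors, dataset_lanes, match_fn):
    """Count how often each anchor is the closest match to a
    ground-truth lane across the training set (illustrative)."""
    freq = np.zeros(len(anchors), dtype=np.int64)
    for lanes in dataset_lanes:                  # one entry per image
        for lane in lanes:
            freq[match_fn(anchors, lane)] += 1   # index of best anchor
    return freq

def keep_topk_anchors(anchors, freq, k=1000):
    """Keep the k most frequently matched anchors. The same k must be
    used at training and inference, since the attention layer's shape
    depends on it."""
    keep_idx = np.argsort(freq)[::-1][:k]
    return [anchors[i] for i in keep_idx]
```

The saved frequencies can then be referenced from the experiment's config file, the same way the provided ones are.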

> Also, I've tested the CULane model on TuSimple and vice versa, and got really bad results: an F1 between 0 and 0.2. Can someone else confirm this?

This is expected. The datasets are very different geometrically (e.g., their images' aspect ratios are very different).

BatmanofZuhandArrgh commented 3 years ago

Thanks for the help