Closed: miao02830 closed this issue 2 years ago
There are mainly two formats for lane evaluation in the field. One is the TuSimple format you just described; the other is the CULane format, which uses a directory of txt files. Maybe you can try that format and see if it is easier? The CULane metric does not constrain labels to a fixed set of h_samples.
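For reference, a minimal sketch of what a CULane-style annotation could look like: each image has a sibling `.lines.txt` file, with one lane per line written as space-separated x y coordinate pairs. The lane points below are invented purely for illustration; heights can be chosen freely per lane, unlike TuSimple's fixed h_samples.

```python
# Sketch of serializing lanes to CULane-style annotation text.
# Each lane becomes one line of space-separated "x y" pixel coordinates.
def format_culane_lanes(lanes):
    """lanes: list of lanes, each a list of (x, y) points."""
    lines = []
    for lane in lanes:
        lines.append(" ".join(f"{x:.1f} {y:.1f}" for x, y in lane))
    return "\n".join(lines) + "\n"

# Two hypothetical lanes, sampled at freely chosen heights.
lanes = [
    [(532.0, 590.0), (540.5, 560.0), (549.0, 530.0)],
    [(780.0, 590.0), (760.0, 560.0), (741.5, 530.0)],
]
print(format_culane_lanes(lanes))
```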
You mean it is recommended to label my dataset in the CULane format instead of the TuSimple format, and that it will be simpler? Is my understanding correct? Thank you!
Well, it depends on what you have. Do you already have labels? Mainly, if they are labeled with a fixed set of heights, TuSimple is better; if they are labeled with free points, CULane is better.
The CULane metric and annotation format are also better accepted by the community. You can see that the new LLAMAS benchmark adopts this standard.
Okay, I got it. I will try it. Thank you very much!
It seems the problem is resolved, I'll close for now. Feel free to reopen!
In the code, I see that the evaluation of the recognition results uses these two JSON files: ../../output/erfnet_baseline_tusimple.json and /home/data/Tusimple/test_label.json.
I would like to ask: if I want to evaluate my own dataset, do I also need to generate JSON files in this format? The file contains lanes, h_samples, and raw_file. I have been searching many websites for the past two days, but I have not found a way to generate this JSON file.
If I want to use your method to train and evaluate my dataset, is there any other way?
Thank you!!
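For reference, a TuSimple-style label file stores one JSON object per line, each with the per-lane x coordinates (`lanes`), the shared fixed row heights (`h_samples`), and the image path (`raw_file`); x is set to -2 where a lane is absent at that height. The coordinates and file path below are invented for illustration; a minimal sketch of generating one such line:

```python
import json

# Sketch of one TuSimple-style label entry (illustrative values only).
# "lanes" holds one list of x coordinates per lane, sampled at the shared
# fixed heights in "h_samples"; x = -2 means "no lane at this height".
h_samples = [160, 170, 180, 190]
entry = {
    "lanes": [
        [-2, 632, 625, 617],   # first lane, absent at y = 160
        [720, 726, 733, 739],  # second lane, present at every height
    ],
    "h_samples": h_samples,
    "raw_file": "clips/0601/1494452385593783358/20.jpg",  # hypothetical path
}
# A full label file would contain one such json.dumps(entry) per image, one per line.
line = json.dumps(entry)
print(line)
```

Each lane's x list must be the same length as h_samples, since both index the same set of rows.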