Closed suchiek closed 6 years ago
Hi. We kept the same structure as the original implementation. If you look into the KITTI dataset, you will see that each image has a label .txt file with lines like these:
Car 0.00 2 1.93 1080.22 179.61 1166.21 213.88 1.40 1.68 4.45 22.90 1.73 32.18 2.54
Car 0.00 1 2.02 952.36 179.80 1056.51 225.44 1.47 1.57 4.24 14.03 1.74 25.64 2.51
...
We don't use all of the parameters, such as rotation or occlusion. So for your own data, each object in an image should get one line in that image's label file, with the unused fields set to zero and the fields separated by spaces, just as in KITTI:

label_string 0 0 0 left top right bottom 0 0 0 0 0 0 0
Your input to the training is a list of the paths to all the images and a list of the paths to all the label .txt files.
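Since your annotations are in a CSV, you would need to split them into one KITTI-style label file per image. The exact CSV schema isn't specified in this thread, so the column names below (`image`, `label`, `left`, `top`, `right`, `bottom`) are assumptions; this is just a minimal sketch of the conversion:

```python
import csv
import os
import tempfile
from collections import defaultdict

def csv_to_kitti_labels(csv_path, out_dir):
    """Convert a flat CSV of boxes into one KITTI-style label .txt per image.

    Assumed (hypothetical) CSV columns: image, label, left, top, right, bottom.
    The unused KITTI fields (truncation, occlusion, alpha, dimensions,
    location, rotation) are written as zeros, as described above.
    """
    # Group all box rows by the image they belong to.
    boxes = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            boxes[row["image"]].append(row)

    os.makedirs(out_dir, exist_ok=True)
    label_paths = []
    for image, rows in boxes.items():
        stem = os.path.splitext(os.path.basename(image))[0]
        path = os.path.join(out_dir, stem + ".txt")
        with open(path, "w") as f:
            for r in rows:
                # One space-separated KITTI line per object, zeros for unused fields.
                f.write("%s 0 0 0 %s %s %s %s 0 0 0 0 0 0 0\n"
                        % (r["label"], r["left"], r["top"], r["right"], r["bottom"]))
        label_paths.append(path)
    return label_paths

# Demo with two boxes for one image (made-up data):
tmp = tempfile.mkdtemp()
csv_file = os.path.join(tmp, "annotations.csv")
with open(csv_file, "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["image", "label", "left", "top", "right", "bottom"])
    w.writerow(["img001.png", "Car", "1080.22", "179.61", "1166.21", "213.88"])
    w.writerow(["img001.png", "Car", "952.36", "179.80", "1056.51", "225.44"])
paths = csv_to_kitti_labels(csv_file, os.path.join(tmp, "labels"))
```

After the conversion you can build the two input lists (image paths and label paths) from the files this produces.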
Hi,
I would like to use this repository to run detection on my own labelled data, and my annotations are in a CSV file. Please let me know the exact layout of the annotation files that I need to use.
thanks!