cguindel / eval_kitti

Tools to evaluate object detection results using the KITTI dataset.

A little question #5

Closed zqdeepbluesky closed 5 years ago

zqdeepbluesky commented 6 years ago

Hi, thanks for your code. I have a little question about the lists folder mentioned in your README. You say: "lists, containing the .txt files with the train/validation splits. These files are expected to contain a list of the used image indices, one per row." and "evaluate_object should be called with the name of the results folder and the validation split; e.g.: ./evaluate_object leaderboard valsplit". I wonder which file valsplit.txt refers to: we split our data into train.txt, test.txt, trainval.txt and val.txt, so I don't know which of these valsplit.txt corresponds to. Can you please help me? Thanks so much, have a good day.

cguindel commented 6 years ago

Hi @zqdeepbluesky. The name of the file with the image indices is just an example; the command-line arguments should be modified according to your setup. As you can check here, I do indeed have a valsplit.txt file, and that is why I use that name in the usage sample. You only have to make sure that you have ground-truth labels for the images that you are using for evaluation.
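For illustration, a split file such as valsplit.txt is just a plain-text list of image indices, one per row, as described in the README (the indices below are only placeholders):

```
000001
000004
000010
000023
```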

zqdeepbluesky commented 6 years ago

Thanks so much for your answer, I will try it with my evaluation labels.

DonghoonPark12 commented 5 years ago

@cguindel Hi cguindel. In your example (./evaluate_object leaderboard valsplit), what files are included in 'leaderboard'? Also, you said the number of arguments is three ('data/object/label_2', 'lists', 'results'), so why are only two arguments used here?

I think a more detailed explanation is needed.

cguindel commented 5 years ago

Assuming that you are using the master branch, evaluate_object only needs two arguments: the first one is the name of the experiment (leaderboard in the example) and the second one is the list of images used for validation (e.g., valsplit). The set of .txt files containing the detections for each frame is supposed to be at build/results/<experiment name>/data. So, if you want to evaluate results for, let's say, "experiment1" (you can choose the name that you want), make sure that build/results/experiment1/data is populated with the .txt files obtained with the detector, one per image in the validation set.
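To make the layout concrete, here is a sketch of what the results directory could look like for an experiment named experiment1 (the image indices are placeholders, and the other folders mentioned in the README, such as lists and the ground-truth labels, are not shown):

```
build/results/experiment1/data/
├── 000001.txt   # detections for image 000001
├── 000004.txt   # detections for image 000004
└── ...          # one .txt file per image in the validation split
```

which would then be evaluated from the build directory with:

```
./evaluate_object experiment1 valsplit
```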

DonghoonPark12 commented 5 years ago

@cguindel It works, thanks a lot.