Closed: tlatlbtle closed this issue 5 years ago
You have to provide predictions for the full set of GT sequences, regardless of whether some of the GT sequences have no annotations. The evaluation script will then only consider prediction sequences for which GT keypoints exist, and ignore the rest of the prediction sequences.
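Before running the evaluation, it can help to verify that a prediction file actually exists for every GT sequence. This is a minimal sketch, not part of poseval's API; the helper name `missing_predictions` and the one-JSON-file-per-sequence directory layout are assumptions:

```python
import os

def missing_predictions(gt_dir, pred_dir):
    """Return the GT sequence files that have no matching prediction file."""
    gt_files = {f for f in os.listdir(gt_dir) if f.endswith(".json")}
    pred_files = {f for f in os.listdir(pred_dir) if f.endswith(".json")}
    # Every GT file must have a prediction file of the same name.
    return sorted(gt_files - pred_files)
```

Any file names returned by this check would make the evaluation script fail before it even compares frame counts.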
I provide predictions for all frames of every sequence in the validation dataset (e.g., the prediction for 001735_mpii_test.json consists of 117 frames), but when I run the following command:
```
python evaluate.py \
  --groundTruth=/home/wjb/data/posetrack/posetrack_data/annotations/val/ \
  --predictions=/home/wjb/posetrack_2018/posetrack_v2.0_val_json \
  --evalPoseTracking \
  --evalPoseEstimation
```
it raises the following error:
```
Traceback (most recent call last):
  File "evaluate.py", line 72, in <module>
    main()
  File "evaluate.py", line 32, in main
    gtFramesAll, prFramesAll = eval_helpers.load_data_dir(argv)
  File "/poseval/py/eval_helpers.py", line 407, in load_data_dir
    raise Exception('# prediction frames %d <> # GT frames %d for %s' % (len(pr),len(gt),predFilename))
Exception: # prediction frames 117 <> # GT frames 52 for /001735_mpii_test.json
```
I found the check in "/poseval/py/eval_helpers.py", line 407:

```python
if (len(pr) <> len(gt)):
    raise Exception('# prediction frames %d <> # GT frames %d for %s' % (len(pr), len(gt), predFilename))
```
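Note that `<>` is the Python 2 inequality operator, which was removed in Python 3. A minimal sketch of the same guard in Python 3 syntax (the function name `check_frame_counts` is hypothetical, not a poseval helper):

```python
def check_frame_counts(pr, gt, pred_filename):
    """Raise if the prediction and GT frame lists differ in length."""
    # "!=" is the Python 3 equivalent of Python 2's "<>".
    if len(pr) != len(gt):
        raise Exception('# prediction frames %d <> # GT frames %d for %s'
                        % (len(pr), len(gt), pred_filename))
```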
It seems that you check whether the numbers of frames in the GT and the prediction are equal. However, if the prediction is in 2017 format while the GT is in 2018 format, the convert_videos step will convert the GT from 2018 format to 2017 format to make it compatible with the prediction.
In the "convert_videos" function, when the GT annotations are converted via the "from_new" function, the video.frames value only contains the frames that have annotations. Is that right?
So after the "to_old" step, the GT only contains 52 frames while the prediction has 117 frames, which triggers the error above.
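One hedged workaround sketch for this mismatch is to drop the prediction frames whose image names do not appear in the converted GT file before evaluating. The `annolist` / `image` / `name` keys below follow the posetrack17 JSON layout, and the function name `filter_pred_to_gt` is hypothetical; adapt both to your actual files:

```python
import json

def filter_pred_to_gt(gt_path, pred_path, out_path):
    """Keep only prediction frames whose image name also occurs in the GT."""
    with open(gt_path) as f:
        gt = json.load(f)
    with open(pred_path) as f:
        pred = json.load(f)
    # Image names of the frames the GT actually contains.
    gt_names = {fr["image"][0]["name"] for fr in gt["annolist"]}
    pred["annolist"] = [fr for fr in pred["annolist"]
                        if fr["image"][0]["name"] in gt_names]
    with open(out_path, "w") as f:
        json.dump(pred, f)
```

After filtering, both files should contain the same number of frames, so the `len(pr) <> len(gt)` check would pass.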
You have to provide predictions in the same format as the ground truth. So if the GT is in posetrack18 format, then the predictions must be in the posetrack18 format as well.
It seems that the problem occurs when the GT annotations are converted to the 2017 format, in the to_old function in convert.py (line 129). The predictions contain results for all frames of a video, but the ground truth only keeps the frames with annotations, so the following error occurs:
```
Traceback (most recent call last):
  File "evaluate.py", line 72, in <module>
    main()
  File "evaluate.py", line 32, in main
    gtFramesAll, prFramesAll = eval_helpers.load_data_dir(argv)
  File "/poseval/py/eval_helpers.py", line 407, in load_data_dir
    raise Exception('# prediction frames %d <> # GT frames %d for %s' % (len(pr),len(gt),predFilename))
Exception: # prediction frames 117 <> # GT frames 52 for /001735_mpii_test.json
```
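To diagnose which sequences trigger this, a quick hedged sketch can report the per-sequence frame counts on both sides before running evaluate.py. The `annolist` key again assumes the posetrack17 layout (use the `images` list instead for posetrack18 files), and `frame_count_report` is a hypothetical helper name:

```python
import json
import os

def frame_count_report(gt_dir, pred_dir):
    """Map each GT file name to (GT frame count, prediction frame count or None)."""
    report = {}
    for name in sorted(os.listdir(gt_dir)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(gt_dir, name)) as f:
            n_gt = len(json.load(f)["annolist"])
        pred_path = os.path.join(pred_dir, name)
        n_pr = None  # None marks a missing prediction file
        if os.path.exists(pred_path):
            with open(pred_path) as f:
                n_pr = len(json.load(f)["annolist"])
        report[name] = (n_gt, n_pr)
    return report
```

Any entry where the two counts differ (e.g. `(52, 117)` for 001735_mpii_test.json) would reproduce the exception above.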