mihaifieraru closed this issue 6 years ago
One can fix it by adding this after https://github.com/leonid-pishchulin/poseval/blob/master/py/eval_helpers.py#L601:
```python
mot = {}
for i in range(nJoints):
    mot[i] = {}
for i in range(nJoints):
    ridxsGT = np.argwhere(hasGT[:, i]).flatten().tolist()
    ridxsPr = np.argwhere(hasPr[:, i]).flatten().tolist()
    mot[i]["trackidxGT"] = [trackidxGT[idx] for idx in ridxsGT]
    mot[i]["trackidxPr"] = [trackidxPr[idx] for idx in ridxsPr]
    mot[i]["ridxsGT"] = np.array(ridxsGT)
    mot[i]["ridxsPr"] = np.array(ridxsPr)
    mot[i]["dist"] = np.full((len(ridxsGT), len(ridxsPr)), np.nan)
```
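For a joint that is never annotated, the snippet above yields empty index lists and a 0x0 distance matrix rather than an undefined entry. A minimal NumPy illustration (the `hasGT` matrix here is a toy stand-in for the real per-frame boolean matrix, not data from poseval):

```python
import numpy as np

# Toy boolean matrix: 3 frames x 2 joints; joint 1 is never annotated.
hasGT = np.array([[True,  False],
                  [True,  False],
                  [False, False]])

nJoints = hasGT.shape[1]
for i in range(nJoints):
    # Same pattern as the fix above: indices of frames where joint i has GT.
    ridxsGT = np.argwhere(hasGT[:, i]).flatten().tolist()
    dist = np.full((len(ridxsGT), len(ridxsGT)), np.nan)
    print(i, ridxsGT, dist.shape)  # joint 1 -> [] and shape (0, 0)
```

The empty `(0, 0)` matrix is what lets later code iterate over the joint without special-casing it.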
Hey, were you able to get PoseTrack evaluation to work with video evaluation? I always get a NaN output from the function above. I also notice that the actual pixel positions are never passed to the mot object anywhere in the function.
Yes, the evaluation works for me. Can you please tell me the command you're running and the exact error you get?
Okay, I am currently using the following file, which I generated for both ground truth and predictions: eval.tar.gz
As you can see, it contains the sequence for one video: images/bonn/000001_bonn/
This dataset does not contain a single instance of the right ankle.
When I run this with the following command:
```
python evaluate.py -g ../../lmdb/eval_truth/ -p ../../lmdb/eval/ -t
```
It crashes with:
```
Traceback (most recent call last):
  File "evaluate.py", line 67, in <module>
    main()
  File "evaluate.py", line 53, in main
    metricsAll = evaluateTracking(gtFramesAll,prFramesAll)
  File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 124, in evaluateTracking
    metricsAll = computeMetrics(gtFramesAll, motAll)
  File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 85, in computeMetrics
    metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/metrics.py", line 127, in compute
    df = df.events
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 231, in events
    self.cached_events_df = MOTAccumulator.new_event_dataframe_with_data(self._indices, self._events)
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 271, in new_event_dataframe_with_data
    raw_type = pd.Categorical(tevents[0], categories=['RAW', 'FP', 'MISS', 'SWITCH', 'MATCH'], ordered=False)
IndexError: list index out of range
```
I did some debugging, and it seems that if a dataset has a particular body part that is completely unannotated, the evaluation crashes on that part. You can see this by adding a print here:
```python
# compute intermediate metrics per joint per sequence
for i in range(nJoints):
    print i
    metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')
    for name in metricsMidNames:
        metricsMidAll[name][0, i] += metricsMid[name]
    metricsMidAll['sumD'][0, i] += accAll[i].events['D'].sum()
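The `IndexError` in the traceback above comes from motmetrics building an event dataframe for a joint with zero recorded events: assuming `tevents` is produced by transposing the event list (as the `new_event_dataframe_with_data` frame in the traceback suggests), an empty event list makes `tevents[0]` fail. A minimal reproduction of that failure mode, in pure Python with no motmetrics required:

```python
# With no events recorded for a joint, transposing the event list
# yields an empty list, so indexing its first column raises IndexError,
# matching the "list index out of range" in the traceback.
events = []                    # a joint with zero annotated instances
tevents = list(zip(*events))   # transpose -> []
try:
    first_column = tevents[0]
except IndexError:
    first_column = None        # this is the crash site
print(first_column)
```

This suggests the workaround of skipping (or pre-seeding) accumulators for joints that never appear in the sequence.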
Also, when I try it with single-frame evaluation, I see that the ankle gives me 50%:

```
Evaluation of per-frame multi-person pose estimation
Average Precision (AP) metric:
& Head & Shou & Elb & Wri & Hip & Knee & Ankl & Total \\
&100.0 &100.0 &100.0 &100.0 &100.0 &100.0 & 50.0 &  93.3 \\
```

But my dataset does not contain even one of the ankles, so how can it report an error for it? I set the prediction and the GT to the same file, so every joint should score 100%.
I'll look into that
Hi @leonid-pishchulin just to check will you be looking into my issue as well?
yeah, but it might be a few days until I get to it
I have the same problem:

> Also when I try it with single frame, I see that ankle gives me 50%
> & Head & Shou & Elb & Wri & Hip & Knee & Ankl & Total \\
> &100.0 &100.0 &100.0 &100.0 &100.0 &100.0 & 50.0 &  93.3 \\
> But my dataset does not even have one of the ankles so how can it give an error. I set the prediction and GT to the same file
The above issues might have been fixed as part of the update I pushed yesterday. This update also allows saving evaluation results per sequence. Please try again.
hey,
what should happen to the variable mot when the following if statement is False? https://github.com/leonid-pishchulin/poseval/blob/master/py/eval_helpers.py#L507
At the moment, when either the GT or the prediction is missing, mot just keeps its value from the previous imgidx; unless imgidx = 0, in which case you actually get an error.
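The carry-over bug can be avoided by always assigning mot on every iteration, so a skipped frame yields an explicit empty result instead of silently reusing the previous frame's value. A sketch of that pattern (the function name, the `frames` structure, and the `"matched"` payload are all hypothetical stand-ins, not the actual poseval code):

```python
def build_mot_per_frame(frames):
    """Hypothetical sketch: produce one mot entry per frame.

    `frames` is a list of dicts with boolean keys "hasGT" and "hasPr".
    """
    motAll = []
    for frame in frames:
        if frame["hasGT"] and frame["hasPr"]:
            mot = {"matched": True}   # stand-in for the real matching result
        else:
            mot = {"matched": False}  # explicit empty result, no carry-over
        motAll.append(mot)
    return motAll

# Frame 0 lacks GT: with this pattern it gets an explicit empty entry
# instead of referencing an undefined mot (the imgidx = 0 crash above).
print(build_mot_per_frame([{"hasGT": False, "hasPr": True},
                           {"hasGT": True,  "hasPr": True}]))
```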
thanks a lot!