mcordts / cityscapesScripts

README and scripts for the Cityscapes Dataset
MIT License

Instance level evaluation crashes when only prediction of labelName is being ignored #89

Open tomsal opened 5 years ago

tomsal commented 5 years ago

Hi!

I've experienced this crash a couple of times already when using the evaluation script on only a few images. The error message I get is the following:

...
File "[...]/evalInstanceLevelSemanticLabeling.py", line 503, in evaluateMatches
    nbTrueExamples = yTrueSortedCumsum[-1]
IndexError: index -1 is out of bounds for axis 0 with size 0

So this is line 503.

I tried to track it down, and this is what I think happens: the crash occurs when the for loop (L398) reaches a labelName (e.g. "bicycle") that has both ground truth and at least one prediction (so haveGt and havePred are both True), but they never match. In that case the prediction should be counted as a false positive. However, when it overlaps significantly with an ignore region, it is ignored instead (L475). As a consequence, len(y_score) == 0, which results in the error above.
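To make the failure mode concrete, here is a minimal sketch (assuming only numpy, with variable names taken from the traceback; the surrounding evaluation loop is not reproduced): when every prediction for a label is discarded by the ignore filter, the score array is empty, its cumsum is empty, and indexing it with -1 raises exactly the IndexError above.

```python
import numpy as np

# Hypothetical minimal reproduction: no predictions survive the ignore
# filter for this label, so both arrays are empty.
y_score = np.array([])
y_true = np.array([])

score_arg_sort = np.argsort(y_score)            # empty index array
y_true_sorted = y_true[score_arg_sort]          # still empty
y_true_sorted_cumsum = np.cumsum(y_true_sorted) # cumsum of empty -> empty

try:
    nb_true_examples = y_true_sorted_cumsum[-1]  # corresponds to L503
except IndexError as e:
    print("IndexError:", e)
```

Running this prints the same "index -1 is out of bounds for axis 0 with size 0" message as the traceback.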

My current solution is to replace the condition at L485 with if haveGt and havePred and len(y_score) > 0:. In the cases that crashed so far, the script then assigns apCurrent = 0.0 instead (L541).

I haven't tested whether this workaround has further consequences, but I don't think it does. It is also rather unlikely to run into this problem when evaluating more images, so I don't consider it high priority. Let me know if I should create a pull request or a minimal working example.
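The guarded condition can be sketched as follows (a simplified stand-in, not the script's actual AP computation; the function name safe_ap and its return value are hypothetical, only the guard mirrors the suggested change):

```python
import numpy as np

def safe_ap(have_gt, have_pred, y_score, y_true):
    """Guarded variant: fall back to 0.0 when no scorable predictions remain."""
    if have_gt and have_pred and len(y_score) > 0:
        order = np.argsort(y_score)
        y_true_sorted_cumsum = np.cumsum(np.asarray(y_true)[order])
        nb_true_examples = y_true_sorted_cumsum[-1]  # safe: y_score is non-empty
        # ... the full precision/recall computation is omitted in this sketch ...
        return nb_true_examples / len(y_score)       # stand-in value only
    # All predictions ignored (or nothing to score): apCurrent = 0.0 (cf. L541)
    return 0.0
```

With empty inputs the guard short-circuits and returns 0.0 instead of crashing; with non-empty inputs the cumsum indexing is reached safely.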

ABDULMAJID01 commented 4 years ago

Hello @tomsal, I am trying to perform an evaluation but am facing the same issue. When I replace the code at L485 as you mention above, my instance segmentation results are all zero. Has your problem been solved? Kindly guide me, thanks in advance.

##################################################
what           :    AP  AP_50%
##################################################
person         : 0.000   0.000
rider          : 0.000   0.000
car            : 0.000   0.000
truck          : 0.000   0.000
bus            : 0.000   0.000
train          : 0.000   0.000
motorcycle     : 0.000   0.000
bicycle        : 0.000   0.000

average        : 0.000   0.000

[10/13 14:33:14 segmentation_test]: OrderedDict([('segm', {'AP': 0.0, 'AP50': 0.0})]).

tomsal commented 4 years ago

Hi @ABDULMAJID01!

I haven't looked into this for a while. You could try what I've suggested above. Note that as I said before, I don't know whether this workaround might have other consequences.

My current solution is to replace the condition at L485 with if haveGt and havePred and len(y_score) > 0:. In the cases that crashed so far, the script then assigns apCurrent = 0.0 instead (L541).