mcordts / cityscapesScripts

README and scripts for the Cityscapes Dataset
MIT License

possible bug in method "evaluateMatches" in evalInstanceLevelSemanticLabeling.py #114

Open LeoGuo98 opened 4 years ago

LeoGuo98 commented 4 years ago

It seems that you only ensured that two predictions cannot match the same GT instance, but not that one prediction matches at most one GT. In the for loop starting at line 423, for each GT you make sure there is at most one true-positive match, but you never check whether a detection has already been matched to another GT. The for loop starting at line 455 is missing the same check. A sketch of the pattern is below.
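To make the concern concrete, here is a minimal, self-contained sketch of the matching pattern as I understand it. This is not the actual code from evalInstanceLevelSemanticLabeling.py; all names (`gts`, `preds`, `OVERLAP_TH`, the `match_*` functions) are hypothetical, and boxes stand in for instance masks:

```python
# Sketch only -- NOT the repository code. Names are hypothetical stand-ins.
OVERLAP_TH = 0.5

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_without_pred_check(gts, preds):
    # Per-GT loop: each GT accepts at most one prediction (so two
    # predictions cannot both be TPs for the same GT), but nothing stops
    # the SAME prediction from being accepted by several GTs.
    cur_true, cur_score = [], []
    for gt in gts:
        best = max((p for p in preds if iou(gt, p["box"]) > OVERLAP_TH),
                   key=lambda p: p["score"], default=None)
        if best is not None:
            cur_true.append(1)
            cur_score.append(best["score"])
    return cur_true, cur_score

def match_with_pred_check(gts, preds):
    # Same loop, but a prediction is consumed once matched, so
    # len(cur_true) can never exceed the number of predictions.
    used, cur_true, cur_score = set(), [], []
    for gt in gts:
        best = max((i for i, p in enumerate(preds)
                    if i not in used and iou(gt, p["box"]) > OVERLAP_TH),
                   key=lambda i: preds[i]["score"], default=None)
        if best is not None:
            used.add(best)
            cur_true.append(1)
            cur_score.append(preds[best]["score"])
    return cur_true, cur_score

# One wide prediction overlapping two GTs above the threshold:
gts = [(0, 0, 12, 10), (8, 0, 20, 10)]
preds = [{"box": (0, 0, 20, 10), "score": 0.9}]
print(len(match_without_pred_check(gts, preds)[0]))  # 2 -- two TPs from one detection
print(len(match_with_pred_check(gts, preds)[0]))     # 1
```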

I am trying to evaluate the model's performance on individual images in the dataset. In some cases, the number of ignored detections plus the length of curScore/curTrue (which equals the number of TPs plus the number of FPs) does not equal the total number of detections.
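A per-image sanity check along these lines would catch the mismatch; the variable names here are hypothetical stand-ins, not the script's actual identifiers:

```python
def check_detection_accounting(n_detections, n_ignored, cur_true):
    """Invariant: every detection is either ignored, a TP, or an FP."""
    n_scored = len(cur_true)  # entries appended to curTrue/curScore (TP + FP)
    if n_ignored + n_scored != n_detections:
        raise AssertionError(
            f"{n_ignored} ignored + {n_scored} scored != {n_detections} detections")

# Example of the mismatch described above: 3 detections, none ignored,
# but 4 entries in curTrue because one detection matched two GTs.
try:
    check_detection_accounting(3, 0, [1, 1, 1, 0])
except AssertionError as e:
    print(e)  # 0 ignored + 4 scored != 3 detections
```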

The root cause is that the number of predictions that set foundGt to True does not equal the length of curTrue at line 451: a single prediction that overlaps several GT instances above the threshold contributes multiple entries to curTrue, even though it is only one detection.