Open tomsal opened 5 years ago
hello @tomsal, I am trying to perform evaluation but am facing the same issue. When I replace the code at (L485) as you mention above, my instance segmentation result is zero. Was your problem solved? Kindly guide me, thanks in advance.
average : 0.000 0.000
[10/13 14:33:14 segmentation_test]: OrderedDict([('segm', {'AP': 0.0, 'AP50': 0.0})]).
Hi @ABDULMAJID01!
I haven't looked into this for a while. You could try what I've suggested above. Note that as I said before, I don't know whether this workaround might have other consequences.
My current solution is to replace (L485) by `if haveGt and havePred and len(y_score) > 0:`. So in the cases where it crashed before, it instead assigns `apCurrent = 0.0` (L541).
Hi!
I've experienced this crash a couple of times already when using the evaluation script on only a few images. The error message I get is the following:
So this is line 503.
I tried to track it down and this is what I think happens: it occurs when the for loop (L398) reaches a `labelName` (e.g. "bicycle") which has both ground truth and, for example, one prediction (thus, `haveGt` and `havePred` are both `True`), but they never match. In this case, the prediction should be counted as a false positive. However, when it overlaps significantly with an ignore region, it is ignored instead (L475). As a consequence, `len(y_score) == 0`, which results in the error above.

My current solution is to replace (L485) by `if haveGt and havePred and len(y_score) > 0:`. So in the cases where it crashed before, it instead assigns `apCurrent = 0.0` (L541).

I haven't tested whether this workaround has further consequences, but I do not think so. It is, however, rather unlikely to run into this problem when dealing with more images. Thus, I don't think it has high priority. Let me know if I should create a pull request or a minimal working example.
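To illustrate the failure mode and the guard, here is a minimal self-contained sketch. The names (`ap_for_label`, `have_gt`, `have_pred`, `y_true`, `y_score`) and the toy AP computation are illustrative only, not the actual identifiers or logic of `evalInstanceLevelSemanticLabeling.py`; the point is that an AP routine that indexes into the score array fails on empty input, while the guarded condition falls back to `0.0` as the script does at (L541).

```python
import numpy as np

def average_precision(y_true, y_score):
    """Toy AP over matched predictions: rank by descending score,
    accumulate true/false positives, integrate precision over recall.
    Indexing precision[0] below raises IndexError on empty input,
    mimicking the crash when all predictions fall in ignore regions."""
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(-np.asarray(y_score, dtype=float))
    tp = np.cumsum(y_true[order])
    fp = np.cumsum(1.0 - y_true[order])
    recall = tp / y_true.sum()
    precision = tp / (tp + fp)
    # Prepend recall=0 so the curve starts at the y-axis.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([precision[0]], precision))
    # Trapezoidal integration of precision over recall.
    return float(np.sum((recall[1:] - recall[:-1])
                        * (precision[1:] + precision[:-1]) / 2.0))

def ap_for_label(have_gt, have_pred, y_true, y_score):
    # The guarded condition replacing (L485): only compute AP when at
    # least one scored prediction survives the ignore-region filtering;
    # otherwise fall back to 0.0 like the assignment at (L541).
    if have_gt and have_pred and len(y_score) > 0:
        return average_precision(y_true, y_score)
    return 0.0
```

With the guard in place, a label whose only predictions were discarded by the ignore-region filter simply contributes an AP of 0.0 instead of crashing the evaluation.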