VisionLearningGroup / taskcv-2017-public


Problems in eval.py #2

Closed cliang1453 closed 7 years ago

cliang1453 commented 7 years ago

In eval.py, the same mask `k` is used for both the ground truth `a` and the prediction `b`:

```python
def fast_hist(a, b, n):
    k = (a >= 0) & (a < n)
    return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n)
```

For this to work, the prediction `b[k]` must contain only labels 0-18 and no label 255. However, if we train the model with 0, 1, 2, ..., 18, 255 (19+1 classes in total), it is unavoidable that some predictions in `b[k]` contain label 255, and the evaluation above breaks.
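The failure mode can be reproduced in a few lines of NumPy (the toy arrays below are assumed for illustration, not taken from the dataset):

```python
import numpy as np

n = 19                           # number of evaluated Cityscapes classes
a = np.array([0, 5, 255, 18])    # toy ground truth with an ignore pixel (255)
b = np.array([0, 5, 3, 255])     # toy prediction that emits a 255 label

k = (a >= 0) & (a < n)           # mask is built from the ground truth only
vals = n * a[k].astype(int) + b[k]

# b[k] still contains 255, so vals can exceed n**2 - 1 = 360.
# np.bincount then returns an array longer than n**2, and the
# subsequent reshape(n, n) inside fast_hist raises a ValueError.
counts = np.bincount(vals, minlength=n ** 2)
print(counts.size)               # larger than 361 when 255 leaks through
```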

So with how many classes should we train the model and run prediction: 0, 1, 2, ..., 18, 255, or 0, 1, 2, ..., 18? If using 0, 1, 2, ..., 18, how do we deal with the "out of interest" classes (merge them with the classes we are interested in)?

Also, what is the functional difference between eval.py and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py?

jhoffman commented 7 years ago

You should only be training with 19 classes (0-18). The 255 values that appear in some of the ground truth data are ignore labels. Most software packages have a flag in their predefined loss functions to handle ignore labels; for example, in Caffe the cross-entropy loss has an ignore flag. To ignore value 255, you would add `ignore_label: 255` as a parameter to the cross-entropy loss during training.
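In a Caffe prototxt this looks roughly like the following (a sketch; the layer and bottom names are placeholders, not from the released model):

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  loss_param {
    ignore_label: 255
  }
}
```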

Our eval function is written so as not to be specific to Cityscapes, but it should be functionally equivalent for computing mIoU.

cliang1453 commented 7 years ago

I get it. Thank you very much! By the way, we really appreciate your work on FCNs in the Wild; is the code implementation going to be released in the near future? Thank you!

jhoffman commented 7 years ago

Yes, FCNs in the wild code will be released soon so that others may build on it for this challenge. The source model released with this challenge is the one used in that paper.