Closed HustHB closed 3 years ago
Hi, all of the metrics that we report in the paper are about classification, as it is the most relevant metric in industrial defect detection.
AP is adopted as the metric for evaluation; would you also provide the AUC values in this repo?
Thanks for your rapid reply! ^_^
First, thanks for making your nice code public!
I just wonder whether the AP metric in the paper is about classification rather than semantic segmentation. In your "end2end.py" (L246), it seems to evaluate only "predictions", which is the output of classification.
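For reference, both AP and the AUC asked about above can be computed from the same image-level classification scores. A minimal sketch using scikit-learn (the variable names `labels` and `predictions` are hypothetical placeholders, not the repo's actual data structures):

```python
# Hypothetical sketch: computing AP and AUC from image-level
# classification scores. `labels` are binary defect labels and
# `predictions` are classifier scores; these stand in for whatever
# end2end.py actually produces.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

labels = np.array([0, 0, 1, 1, 0, 1])                     # ground truth
predictions = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])   # scores

ap = average_precision_score(labels, predictions)   # area under PR curve
auc = roc_auc_score(labels, predictions)            # area under ROC curve
print(f"AP={ap:.3f}, AUC={auc:.3f}")
```

Since both metrics take the same (labels, scores) pair, reporting AUC alongside AP would require no extra inference, only one additional metric call.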
By the way, I think "weak supervision" is not a precise term for your mixed-supervision setting: if the final evaluation is only about classification, then both class-level and pixel-level labels are fully supervised labels for the classification task.