rishabh-akridata opened this issue 1 month ago (status: Open)
What do you mean by low? The scores are normalized to the range [0, 1], with 0 being normal and 1 being abnormal.
@alexriedel1 The scores are normalized between 0 and 1, but they are not high for the anomalous samples. Almost all of the scores are close to 0.5.
Assuming you have trained your model using normal and abnormal images, maybe the difference between normal and abnormal isn't very large. The training procedure normalizes the outputs so that 0.5 is the threshold for abnormal. You could try a different algorithm than EfficientAD and see if the problem still exists. If it does, the chance is high that your normal and abnormal images are not different enough. Maybe you can show some of your images?
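To make the "0.5 is the threshold" point concrete, here is a minimal sketch of the min-max normalization anomalib applies to image scores (the exact function name and location differ between versions): raw scores are shifted so the learned threshold maps to 0.5 and then clipped to [0, 1]. If the gap between normal and abnormal raw scores is small relative to the min/max range seen during validation, everything ends up clustered around 0.5.

```python
import numpy as np

def normalize_min_max(scores: np.ndarray, threshold: float,
                      min_val: float, max_val: float) -> np.ndarray:
    """Shift raw anomaly scores so the learned threshold lands at 0.5, then clip to [0, 1]."""
    normalized = (scores - threshold) / (max_val - min_val) + 0.5
    return np.clip(normalized, 0.0, 1.0)

# Hypothetical raw EfficientAD scores: two normal images and one abnormal image.
raw_scores = np.array([0.20, 0.22, 0.30])
print(normalize_min_max(raw_scores, threshold=0.24, min_val=0.10, max_val=0.90))
# -> [0.45, 0.475, 0.575]: the abnormal image sits only slightly above 0.5
#    because the raw score gap is small compared to the observed min/max range.
```

In other words, what matters is whether a score is above 0.5, not how far above it is.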
@alexriedel1 I am getting the following anomaly scores on the MVTec bottle class. Is this reasonable? I tried other methods such as PaDiM, and its scores for anomalous images are pretty high, such as 0.70 and 0.94. Is this because PaDiM is a distance-based method and EfficientAD is a reconstruction-based method?
How do you obtain these results? Through validation during training or through testing after training? Can you share your code?
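For reference, here is a minimal sketch of how test-time image scores can be obtained with anomalib's Python API. It assumes the >=1.0 `Engine`/`EfficientAd`/`MVTec` classes (the 0.x releases use config files and a Lightning `Trainer` instead), and it is not the poster's actual code.

```python
# Sketch: train EfficientAd and collect normalized per-image scores after training.
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# EfficientAd is trained with batch size 1 in anomalib.
datamodule = MVTec(category="bottle", train_batch_size=1)
model = EfficientAd()
engine = Engine()

engine.fit(model=model, datamodule=datamodule)                     # training (with validation)
predictions = engine.predict(model=model, datamodule=datamodule)   # testing after training

# Each prediction batch is expected to carry normalized image-level scores
# under "pred_scores" (key name assumed; inspect one batch in your version).
for batch in predictions:
    print(batch["pred_scores"])
```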
@alexriedel1 I used PatchCore and EfficientAD to classify the same dataset. The F1 score and AUROC of PatchCore are much better than EfficientAD's, and PatchCore's heat map clearly shows the defective area, while EfficientAD's heat map hardly reflects it. Both were trained for 100 epochs. I would like to know whether there is room for improvement if I change EfficientAD's epochs to 1000. I don't have mask annotation data. Is there any room for improvement, especially in the heat maps' localization of defects and in accuracy?
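On the question of training longer: the original EfficientAD recipe trains for 70,000 optimizer steps, which with batch size 1 can be far more optimization steps than 100 epochs on a small dataset, so extending training is worth trying. A minimal sketch, assuming anomalib's `Engine` forwards Lightning `Trainer` arguments such as `max_steps` (this pass-through and the argument name are assumptions; check your version's docs):

```python
# Sketch: train EfficientAd for more optimization steps.
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

datamodule = MVTec(category="bottle", train_batch_size=1)  # substitute your own datamodule
model = EfficientAd()

# 70_000 steps matches the EfficientAD paper; tune to your dataset and budget.
engine = Engine(max_steps=70_000)
engine.fit(model=model, datamodule=datamodule)
```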
Hi, the pred scores of the EfficientAD model are coming out very low. Any idea what the reason behind this might be?
Thanks.