Closed stvogel closed 2 years ago
I have tried layers 3 and 4 in the industrial version of this code (https://github.com/dhkdnduq/PatchCore_anomaly_detection) like this:

```python
self.model.layer3[-1].register_forward_hook(hook_t)
self.model.layer4[-1].register_forward_hook(hook_t)
```
For my input images, this gave a lower AUC-ROC than layers 2 and 3.
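For context, PatchCore-style implementations build patch features by upsampling the deeper feature map to the spatial grid of the shallower one and concatenating along the channel axis, so swapping which layers are hooked changes the size of the patch grid. A minimal NumPy sketch of that idea (the shapes assume a 224x224 input on a ResNet-style backbone, where layer3 yields a 14x14 map and layer4 a 7x7 map; channel counts and the function name are illustrative, not from the repo):

```python
import numpy as np

def embed_concat(f_shallow, f_deep):
    """Nearest-neighbour upsample the deeper feature map to the
    shallower map's spatial size, then concatenate along channels.
    f_shallow: (C1, H, W), f_deep: (C2, H // s, W // s)."""
    scale = f_shallow.shape[1] // f_deep.shape[1]
    up = f_deep.repeat(scale, axis=1).repeat(scale, axis=2)  # (C2, H, W)
    return np.concatenate([f_shallow, up], axis=0)           # (C1 + C2, H, W)

# Illustrative shapes: layer3 -> (1024, 14, 14), layer4 -> (2048, 7, 7)
layer3_feat = np.random.rand(1024, 14, 14)
layer4_feat = np.random.rand(2048, 7, 7)
emb = embed_concat(layer3_feat, layer4_feat)
print(emb.shape)  # (3072, 14, 14): 14 * 14 = 196 patches per image
```

With layer3 as the shallowest hooked layer, the patch grid is the layer3 resolution, which is why the downstream reshape has to change when the layer pair changes.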
Thanks for your comment, @royarahimzadeh.
I also had to adapt the reshape here (when I switched to only layers 3 and 4):

```python
anomaly_map = score_patches[:, 0].reshape((28, 28))
```
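The target shape of that reshape follows from the cumulative stride of the shallowest hooked layer relative to the input size. Assuming standard ResNet stage strides (layer2 → 8, layer3 → 16, layer4 → 32) and a square input, a hypothetical helper to derive the grid side:

```python
# Assumed cumulative strides of standard ResNet stages (not from the repo)
STRIDES = {"layer2": 8, "layer3": 16, "layer4": 32}

def anomaly_map_side(input_size: int, shallowest_layer: str) -> int:
    """Side length of the patch grid, i.e. of the reshaped anomaly map."""
    return input_size // STRIDES[shallowest_layer]

print(anomaly_map_side(224, "layer2"))  # 28 -> reshape((28, 28))
print(anomaly_map_side(224, "layer3"))  # 14 -> reshape((14, 14))
print(anomaly_map_side(448, "layer3"))  # 28 -> (28, 28) again, larger input
```

So a 28x28 grid with layers 3 and 4 is consistent with a larger input resolution than the default 224.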
But on some image datasets the values in `score_patches` were so large that

```python
w = 1 - (np.max(np.exp(N_b)) / np.sum(np.exp(N_b)))
```

evaluated to NaN.
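That NaN comes from overflow: for large `N_b`, `np.exp` returns `inf`, and the resulting `inf / inf` division is NaN. The standard fix is to subtract the maximum before exponentiating (the log-sum-exp trick), which leaves the softmax ratio mathematically unchanged but keeps every term bounded. A sketch, with `weight_naive`/`weight_stable` as hypothetical names for the two variants:

```python
import numpy as np

def weight_naive(n_b):
    # Overflows for large inputs: exp(1000) -> inf, and inf / inf -> nan
    return 1 - np.max(np.exp(n_b)) / np.sum(np.exp(n_b))

def weight_stable(n_b):
    # Subtracting the max keeps the ratio identical but bounded:
    # the largest term becomes exp(0) == 1
    e = np.exp(n_b - np.max(n_b))
    return 1 - 1.0 / np.sum(e)

n_b = np.array([1000.0, 990.0, 980.0])
with np.errstate(over="ignore"):
    print(weight_naive(n_b))   # nan
print(weight_stable(n_b))      # small finite value, no NaN
```

The same trick is why most softmax implementations shift by the maximum internally.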
I also tried using all three layers (2, 3, and 4). In that case I didn't have to adapt the reshaping, of course, but the AUC-ROC was again slightly lower.
As this wasn't really an issue but rather a point of interest, I'm simply closing it.
Thanks for this great implementation. In the paper the authors claim in "Evaluation on other benchmarks":
I tried simply swapping layers 2 and 3 for layers 3 and 4 and had to adapt the reshaping later on. But the results were a complete failure: the heatmap is always blank.
Has anyone used layer4 successfully?