Open liwei33660 opened 1 year ago
Hello,
In my opinion, the "anomaly" in anomaly detection should be unknown, so we shouldn't validate the algorithm on a real anomaly dataset. However, we know it is necessary to have some validation procedure for tuning the hyper-parameters. Recently, I found some studies related to this: (1) We can generate a pseudo-anomaly dataset to tune the model: http://medicalood.dkfz.de/web/ (2) As the number of iterations increases, the model should converge to an optimum, so we may not need a validation set to find the best epoch: https://github.com/zhiyuanyou/UniAD
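To make idea (1) concrete, here is a minimal sketch of building a pseudo-anomaly validation set by corrupting normal images, so hyper-parameters can be tuned without any real anomalies. This is a generic CutPaste-style illustration under my own assumptions, not the exact procedure used by the medical OOD benchmark linked above; the function name `make_pseudo_anomaly` and all parameters are hypothetical.

```python
import numpy as np

def make_pseudo_anomaly(image: np.ndarray, rng: np.random.Generator,
                        patch_frac: float = 0.2):
    """Copy a random patch of a normal image to a random location.

    Returns (corrupted_image, mask), where mask marks the pasted region
    and serves as pixel-level pseudo ground truth.
    """
    h, w = image.shape[:2]
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    sy, sx = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)  # source corner
    dy, dx = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)  # paste corner
    out = image.copy()
    out[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[dy:dy + ph, dx:dx + pw] = 1
    return out, mask

# Build one pseudo-anomalous sample from a (random, stand-in) normal image.
rng = np.random.default_rng(0)
normal = rng.random((64, 64, 3))
corrupted, mask = make_pseudo_anomaly(normal, rng)
```

The resulting (corrupted image, mask) pairs act as a validation set: one would pick the hyper-parameters (or the epoch) that score best, e.g. by pixel-level AUROC, on these pseudo anomalies instead of on held-out real anomalies.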
I think anomaly detection is still a developing research area, and recent studies are aiming to solve this problem.
Thank you for sharing!
Your clear and compact explanation inspired me a lot. But I have some issues assessing the validity of the evaluation. As we know, RD4AD was evaluated on the widely used MVTecAD dataset and demonstrated impressive performance. Since MVTecAD contains only training and test sets, the paper followed that dataset setup and does not mention the concept of a validation set. However, I am still confused about this:
Thank you very much for your kind reading!