An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
In the first case, you used ground truth for all the models, yet the research papers describe some of these models as unsupervised. Based on your code explanation, is the validation loss excluded from training for some models?
In the second case, you used the synthetic method for validation. The sample dataset used during validation comes from the training set, right? Then how can we tell which models are unsupervised, since every model can produce results with the synthetic method?
In practice, how do you show, through the code, the training process, the testing process, or the synthetic method, whether the models are unsupervised or supervised?
Describe the solution you'd like
There must be a proper explanation to clear up this confusion.
As of the latest anomalib version, v0.7.0, all models are one-class classifiers, so they do not need any anomalous images during training. When anomalous images are available, however, they are used during validation and testing to measure model performance.
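To make the one-class idea concrete, here is a minimal, hypothetical sketch (not anomalib's actual implementation): the model is fit on normal samples only, and anomalous samples, if available, appear only at validation/test time, where they are scored rather than trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" set: normal feature vectors only -- no anomalous images needed.
normal_train = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Fit: estimate the center of the normal class.
center = normal_train.mean(axis=0)

def anomaly_score(x: np.ndarray) -> float:
    """Distance from the normal-class center; higher means more anomalous."""
    return float(np.linalg.norm(x - center))

# Validation/test time: anomalous samples (if available) are only scored.
normal_sample = rng.normal(0.0, 1.0, size=8)
anomalous_sample = rng.normal(5.0, 1.0, size=8)  # drawn from a shifted distribution
```

A real one-class model would learn a richer representation of normality, but the training loop is the same in spirit: no anomaly labels are ever consumed during fitting.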
If the dataset does not contain any anomalous images, the synthetic anomaly generation method samples some normal images from the training set and corrupts them to create anomalous images for the validation set. Note that these synthetic images are currently used only for validation, not training, so any model in anomalib can use this approach even though the models are one-class methods.
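The mechanism above can be sketched as follows. This is an illustrative toy version, not anomalib's actual generator (which uses more sophisticated perturbations): a normal image is sampled and a random patch is corrupted, yielding an image/mask pair for the validation set.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_synthetic_anomaly(normal_img: np.ndarray, patch: int = 8):
    """Return (anomalous_image, pixel_mask) built from a normal image.

    Hypothetical helper: corrupts one random square patch with noise and
    records its location as the ground-truth anomaly mask.
    """
    img = normal_img.copy()
    h, w = img.shape
    y = int(rng.integers(0, h - patch))
    x = int(rng.integers(0, w - patch))
    img[y:y + patch, x:x + patch] = rng.uniform(0.0, 1.0, size=(patch, patch))
    mask = np.zeros_like(img)
    mask[y:y + patch, x:x + patch] = 1.0
    return img, mask

normal_img = np.full((32, 32), 0.5)  # stand-in for a normal training image
anom_img, mask = make_synthetic_anomaly(normal_img)
```

Because the generated pairs live only in the validation set, training still sees nothing but normal images, which is why this works for every one-class model.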
As mentioned above, anomalib currently does not include any supervised models. With v1, we are adding zero-/few-shot models such as WinCLIP (#1575). We therefore decided to add a LearningType enum to the models to make explicit which learning type each model is trained with; see PR #1598.
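A hedged sketch of the LearningType idea: an enum attached to each model class that declares how it learns. The member names and the model stub below are illustrative and may differ from what PR #1598 actually merged.

```python
from enum import Enum

class LearningType(str, Enum):
    """Illustrative enum: how a model acquires its decision boundary."""
    ONE_CLASS = "one_class"  # trained on normal images only
    ZERO_SHOT = "zero_shot"  # no task-specific training
    FEW_SHOT = "few_shot"    # a handful of reference images

class WinCLIPStub:
    """Hypothetical model stub carrying its declared learning type."""
    learning_type = LearningType.ZERO_SHOT
```

With such a field, users can inspect `model.learning_type` programmatically instead of inferring the training regime from papers or code.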
Hope this clarifies your questions. Feel free to ask more if this is not helpful.