openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

[Task]: Improving Anomalib's thresholding mechanism to enable a fully unsupervised workflow #986

Closed djdameln closed 3 months ago

djdameln commented 1 year ago

What is the motivation for this task?

Background

Anomaly Detection

Anomaly detection is the process of identifying data points, events, or observations within a dataset that significantly deviate from the normal or expected behavior. Anomalies may be caused by a variety of factors, including failures, fraud, or unusual behavior.

Anomaly detection is an essential task in numerous industries, including cybersecurity, finance, healthcare, and manufacturing. It can be used to detect fraud in financial transactions, to identify anomalies in medical data for early disease diagnosis, to detect flaws in manufacturing processes, and to monitor traffic for security threats.

Anomalib

Anomalib is a deep learning library that aims to collect the best anomaly detection algorithms for testing on both public and private datasets. Anomalib offers several ready-to-use implementations of anomaly detection algorithms described in recent research, as well as a set of tools that make it easier to build and use custom models. The library has a strong focus on image-based anomaly detection, where the goal of the algorithm is to identify anomalous images or anomalous pixel regions within images in a dataset.

The thresholding problem

Anomaly detection models in Anomalib are trained only on normal images. During inference, the models are tasked with distinguishing anomalous samples from normal samples. The task is similar to a classical binary classification problem, but instead of generating a class label and a confidence score, Anomalib models generate an anomaly score, which quantifies the distance of the sample to the distribution of normal samples seen during training. The range of possible anomaly score values is unbounded and may differ widely between models and/or datasets, which makes it challenging to set a good threshold for mapping the raw anomaly scores to a binary class label (normal vs. anomalous).
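To make the problem concrete, here is a tiny illustration (the model names and score values are made up) of why a single fixed threshold does not transfer between models or datasets when raw anomaly scores are unbounded and live on different scales:

```python
import numpy as np

# Hypothetical raw anomaly scores for the same four images from two
# different models -- the last image is the anomalous one in both cases.
scores_model_a = np.array([0.10, 0.20, 0.15, 3.5])    # scores roughly in [0, 4]
scores_model_b = np.array([12.0, 14.0, 13.0, 95.0])   # same images, different scale

threshold = 1.0                       # a threshold chosen for model A...
labels_a = scores_model_a > threshold  # [False, False, False, True]: correct
labels_b = scores_model_b > threshold  # [True, True, True, True]: everything flagged
print(labels_a, labels_b)
```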

Describe the solution you'd like

Desired solution

Anomalib currently has an adaptive thresholding mechanism in place which aims to address the thresholding problem. The adaptive thresholding mechanism computes the F1 score over a validation set for a range of thresholds. The final threshold value is the threshold value that results in the highest F1 score. A major drawback of this approach is that the validation set is required to contain anomalous samples, which might not always be available in real-world anomaly detection problems.
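The search described above can be sketched as follows. This is an illustration of the idea, not Anomalib's exact implementation: sweep candidate thresholds between the minimum and maximum validation score and keep the one that maximizes F1.

```python
import numpy as np

def adaptive_f1_threshold(scores: np.ndarray, labels: np.ndarray, num_steps: int = 100) -> float:
    """Sketch of an F1-maximizing threshold search. Note that it only
    works if the validation set contains anomalous samples (labels == 1),
    which is exactly the drawback discussed above."""
    best_t, best_f1 = float(scores.min()), -1.0
    for t in np.linspace(scores.min(), scores.max(), num_steps):
        preds = scores >= t
        tp = np.sum(preds & (labels == 1))
        fp = np.sum(preds & (labels == 0))
        fn = np.sum(~preds & (labels == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t

# Toy validation set: low scores are normal (0), high scores anomalous (1).
scores = np.array([0.1, 0.3, 0.2, 2.5, 3.0])
labels = np.array([0, 0, 0, 1, 1])
print(adaptive_f1_threshold(scores, labels))  # a value between 0.3 and 2.5
```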

The goal of this hackathon is to design a fully unsupervised thresholding mechanism that does not rely on anomalous samples.

Possible approaches
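The issue leaves this section open. For illustration only (this is not a proposal from the thread), one family of unsupervised approaches estimates the threshold from the distribution of normal validation scores alone, e.g. mean plus k standard deviations, or a high quantile:

```python
import numpy as np

def unsupervised_threshold(normal_scores: np.ndarray, k: float = 3.0) -> float:
    """Illustrative unsupervised threshold: assume roughly Gaussian normal
    scores and flag anything beyond k standard deviations of the mean.
    A high quantile, e.g. np.quantile(normal_scores, 0.99), is a common
    distribution-free alternative."""
    return float(normal_scores.mean() + k * normal_scores.std())

# Validation scores from normal images only -- no anomalous samples needed.
normal_scores = np.array([0.10, 0.12, 0.09, 0.11, 0.13])
t = unsupervised_threshold(normal_scores)
print(t)
```

The appeal of this class of methods is that they need nothing but the normal validation scores; the cost is a distributional assumption that may not hold for every model or dataset.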

Additional context

No response

leonardloh commented 1 year ago

@samet-akcay, team 2 would like to tackle this problem.

samet-akcay commented 1 year ago

Thanks @leonardloh, I'm assigning the task to you then.

holzweber commented 7 months ago

Any updates on this task? Are you still working on different thresholding mechanisms? 😄

samet-akcay commented 7 months ago

@holzweber, there are a bunch of related PRs that are still open. We need to refactor them to our new v1 API, after which they can be merged.

samet-akcay commented 7 months ago

They basically need to follow this base interface, similar to the adaptive F1 and manual thresholds.
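As a rough sketch of what such a common interface could look like (the class and method names here are assumptions for illustration, not Anomalib's actual v1 API): each thresholding strategy consumes validation scores and produces a scalar threshold, so adaptive, manual, and unsupervised variants become interchangeable.

```python
from abc import ABC, abstractmethod
from typing import Optional

import numpy as np

class BaseThreshold(ABC):
    """Hypothetical base interface: every strategy maps validation
    scores (and optionally labels) to a single threshold value."""

    @abstractmethod
    def compute(self, scores: np.ndarray, labels: Optional[np.ndarray] = None) -> float:
        ...

class ManualThreshold(BaseThreshold):
    """A fixed, user-supplied threshold."""

    def __init__(self, value: float) -> None:
        self.value = value

    def compute(self, scores: np.ndarray, labels: Optional[np.ndarray] = None) -> float:
        return self.value

class MaxNormalThreshold(BaseThreshold):
    """Unsupervised example: the largest score observed on normal data."""

    def compute(self, scores: np.ndarray, labels: Optional[np.ndarray] = None) -> float:
        return float(scores.max())

print(ManualThreshold(0.5).compute(np.array([0.1, 0.9])))  # 0.5
print(MaxNormalThreshold().compute(np.array([0.1, 0.9])))  # 0.9
```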