openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

Metrics redesign #2326

Open djdameln opened 1 week ago

djdameln commented 1 week ago

๐Ÿ“ Description

Some examples:

```python
from anomalib.models import Padim
from anomalib.metrics import AnomalibMetric, AUROC, F1Score
from anomalib.metrics.base import create_anomalib_metric

# A model-specific default set of metrics is provided by anomalib:
# >>> Padim.default_evaluator()
# Evaluator(
#   (val_metrics): ModuleList()
#   (test_metrics): ModuleList(
#     (0): AUROC()
#     (1): F1Score()
#     (2): AUROC()
#     (3): F1Score()
#   )
# )

# But we can also pass our own set of metrics:
image_f1_score = F1Score(fields=["pred_label", "gt_label"])
image_auroc = AUROC(fields=["pred_label", "gt_label"])
Padim(metrics=[image_f1_score, image_auroc])

# When passing multiple metrics of the same type, we need to provide a prefix:
image_f1_score = F1Score(fields=["pred_label", "gt_label"], prefix="image_")
pixel_f1_score = F1Score(fields=["pred_mask", "gt_mask"], prefix="pixel_")
Padim(metrics=[image_f1_score, pixel_f1_score])

# We can also use torchmetrics classes that are not available in Anomalib:
from torchmetrics import Accuracy

# Option 1: create a custom metric class
class AnomalibAccuracy(AnomalibMetric, Accuracy):
    pass

image_accuracy = AnomalibAccuracy(fields=["pred_label", "gt_label"])

# Option 2: use the create_anomalib_metric utility function
AnomalibAccuracy = create_anomalib_metric(Accuracy)
image_accuracy = AnomalibAccuracy(fields=["pred_label", "gt_label"])
```
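
For context, here is a minimal sketch of how the `AnomalibMetric` mixin and the `create_anomalib_metric` helper could work under the hood. This is an assumption about the design rather than the code in this PR: the wrapper stores the field names and pulls the corresponding batch attributes before delegating to the wrapped torchmetrics `update`.

```python
# Minimal sketch (an assumption, not the actual code in src/anomalib/metrics/base.py):
# the mixin adapts any torchmetrics class to Anomalib batches via named fields.
class AnomalibMetric:
    """Mixin that feeds selected batch fields into a torchmetrics metric."""

    def __init__(self, fields: list[str], prefix: str = "", **kwargs) -> None:
        super().__init__(**kwargs)
        self.fields = fields
        self.prefix = prefix

    def update(self, batch, *args, **kwargs) -> None:
        # Collect e.g. batch.pred_label and batch.gt_label for fields=["pred_label", "gt_label"]
        # and forward them to the wrapped torchmetrics update.
        values = [getattr(batch, field) for field in self.fields]
        super().update(*values, *args, **kwargs)


def create_anomalib_metric(metric_cls: type) -> type:
    """Create an Anomalib-aware subclass of any torchmetrics metric."""
    return type(f"Anomalib{metric_cls.__name__}", (AnomalibMetric, metric_cls), {})
```

This is only to illustrate why every metric above is constructed with `fields` (and optionally `prefix`); the real implementation lives in src/anomalib/metrics/base.py.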

Other changes:

✨ Changes

Select what type of change your PR is:

✅ Checklist

Before you submit your pull request, please make sure you have completed the following steps:

For more information about code review checklists, see the Code Review Checklist.

review-notebook-app[bot] commented 1 week ago

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



codecov[bot] commented 1 week ago

Codecov Report

Attention: Patch coverage is 94.67456% with 9 lines in your changes missing coverage. Please review.

Please upload report for BASE (feature/design-simplifications@8543e24). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| .../anomalib/models/components/base/anomaly_module.py | 84.00% | 4 Missing :warning: |
| src/anomalib/metrics/base.py | 85.00% | 3 Missing :warning: |
| src/anomalib/metrics/evaluator.py | 95.00% | 2 Missing :warning: |
Additional details and impacted files

```diff
@@                 Coverage Diff                              @@
##      feature/design-simplifications    #2326        +/-   ##
=================================================================
  Coverage                          ?   80.76%
=================================================================
  Files                             ?      272
  Lines                             ?    11212
  Branches                          ?        0
=================================================================
  Hits                              ?     9055
  Misses                            ?     2157
  Partials                          ?        0
```

| [Flag](https://app.codecov.io/gh/openvinotoolkit/anomalib/pull/2326/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit) | Coverage Δ | |
|---|---|---|
| [](https://app.codecov.io/gh/openvinotoolkit/anomalib/pull/2326/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit) | `80.76% <94.67%> (?)` | |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.

samet-akcay commented 6 days ago

@abc-125 @alexriedel1 @blaz-r @jpcbertoldo, if/when you have some time, would you like to review this PR? It introduces some major changes. It would be great to get your perspective.

blaz-r commented 3 days ago

I'll check this tomorrow in the morning.

blaz-r commented 2 days ago

I think this looks great 😄. I added some small comments.

I do have two main things to point out:

  1. Similar to the one from Ashwin: is there no way to pass the metrics just by name now (CLI, config)? (A rough sketch of what I mean follows after this list.)
  2. About the default evaluator inside anomaly_module. This change might not be completely compatible with current configs due to early stopping and behavior of existing validation (more details about this in the code comment).
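
The helper below is hypothetical and not an existing anomalib function; it only illustrates the kind of by-name selection I mean, resolving class names against the new class-based metrics API:

```python
# Hypothetical helper (not part of anomalib) that resolves metric names, e.g. coming
# from a config file or CLI argument, to instances of the new class-based metrics.
import anomalib.metrics


def metrics_from_names(names: list[str], fields: list[str]) -> list:
    """Turn names like ["AUROC", "F1Score"] into configured metric instances."""
    return [getattr(anomalib.metrics, name)(fields=fields) for name in names]


image_metrics = metrics_from_names(["AUROC", "F1Score"], fields=["pred_label", "gt_label"])
```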

samet-akcay commented 2 days ago

@blaz-r, thanks for your review. This PR is part of a greater initiative, which might potentially break things again. That's why we started to think of this as anomalib v2. Here is a proposal showing what the Anomalib base model would look like:

Anomalib - Aux Operations Design.pptx

blaz-r commented 1 day ago

Cool, thanks for sharing the info.

jpcbertoldo commented 1 day ago

Just as a side note: if you add AUPIMO to this, I would suggest a small change in the design.

Remove the return_average option that we added at the last minute, and instead create another class, AverageAUPIMO(AUPIMO).

It would have something like

```python
def compute(...):
    _, aupimo_result = super().compute(...)
    # normal images have NaN AUPIMO scores
    is_nan = torch.isnan(aupimo_result.aupimos)
    return aupimo_result.aupimos[~is_nan].mean()
```

which is currently done with an `if` inside AUPIMO.
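
A rough sketch of how that could combine with the wrapping mechanism from the PR description; the import path, the compute() signature, and the field names below are assumptions on my part:

```python
# Hedged sketch: AverageAUPIMO as a plain subclass, then wrapped like any other
# torchmetrics class. Import path, compute() signature, and field names are assumptions.
import torch

from anomalib.metrics import AUPIMO
from anomalib.metrics.base import create_anomalib_metric


class AverageAUPIMO(AUPIMO):
    """AUPIMO variant that returns the mean score over anomalous images only."""

    def compute(self):
        _, aupimo_result = super().compute()
        # normal images have NaN AUPIMO scores, so exclude them from the average
        is_nan = torch.isnan(aupimo_result.aupimos)
        return aupimo_result.aupimos[~is_nan].mean()


AnomalibAverageAUPIMO = create_anomalib_metric(AverageAUPIMO)
pixel_aupimo = AnomalibAverageAUPIMO(fields=["anomaly_map", "gt_mask"])  # field names are assumptions
```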