openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

[Bug]: Datamodule transforms are not applied to visualization #2121

Open alsmeirelles opened 3 weeks ago

alsmeirelles commented 3 weeks ago

Describe the bug

When you set transforms in a datamodule, the visualization generated after testing does not reflect them, because the original image is always re-read from file (in class ImageVisualizer, line 143). If you apply a rotation, for example, the displayed mask is rotated but the original image is not, so the mask no longer matches the image.

I'm not sure if this behaviour also interferes with metrics calculations.

UPDATE: yes, this issue affects pixel metrics calculation

I applied a workaround/solution to the _visualize_batch method: comment out the read_image and cv2.resize calls and use the image already in the batch instead: image = batch["image"][i].permute(1, 2, 0).numpy()
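A minimal, self-contained sketch of that workaround (the helper name is illustrative, not anomalib API; it assumes the batch tensor holds CHW values in [0, 1], i.e. no ImageNet normalization in the transform stack):

```python
import numpy as np
import torch

def batch_image_to_numpy(batch: dict, i: int) -> np.ndarray:
    """Recover the i-th image of a batch as an HxWxC uint8 array.

    Instead of re-reading the original file with read_image()/cv2.resize(),
    reuse the tensor that already went through the datamodule transforms,
    so the displayed image matches the transformed mask.
    """
    image = batch["image"][i].permute(1, 2, 0).cpu().numpy()  # CHW -> HWC
    return (image * 255).clip(0, 255).astype(np.uint8)

# Minimal usage with a fake batch
batch = {"image": torch.rand(2, 3, 8, 8)}
img = batch_image_to_numpy(batch, 0)
print(img.shape, img.dtype)  # (8, 8, 3) uint8
```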

Dataset

Other (please specify in the text field below)

Folder dataset

Model

Other (please specify in the field below)

EfficientAD, but all models are affected

Steps to reproduce the behavior

Anomalib version: 1.0.1
Dataset: any
Model: any

Just run any test through the engine's test method and check the images generated in the results folder.

OS information


Expected behavior

The image grid visualization should have matching images, with the masks corresponding to the input image.

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

I'm not using a YAML config. All arguments are passed through the API calls.

Logs

There are no generated logs.

Code of Conduct

alexriedel1 commented 3 weeks ago

For this you would need to denormalize the image before displaying. However, you cannot tell exactly whether the image was normalized or not.

> UPDATE: yes, this issue affects pixel metrics calculation

How and where?

alsmeirelles commented 2 weeks ago

> For this you would need to denormalize the image before displaying. However, you cannot tell exactly whether the image was normalized or not.

Using the image in the batch seems to work fine; is there any reason not to do this?

> UPDATE: yes, this issue affects pixel metrics calculation
>
> How and where?

The engine object sets the callbacks.

The transforms callback is processed before the metrics callbacks, so it is the transformed batch that reaches the metrics. Pixel F1 and AUROC use the rotated masks together with the image contained in the ImageResult object, so if the image and its mask do not correspond, the metric values will be incorrect.
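A toy illustration of why a mask-only spatial transform corrupts pixel metrics (numbers and shapes are hypothetical; pixel_f1 is a hand-rolled stand-in, not anomalib's metric implementation):

```python
import numpy as np

# Hypothetical 4x4 ground-truth defect mask, and a "perfect" anomaly map
# that matches the ORIGINAL (un-rotated) image orientation.
mask = np.zeros((4, 4), dtype=int)
mask[0, :2] = 1                      # defect in the top-left corner
anomaly_map = mask.copy()            # perfect prediction, same orientation

rotated_mask = np.rot90(mask)        # transform applied to the mask only

def pixel_f1(pred, target):
    """Pixel-wise F1 between binary arrays."""
    tp = np.sum((pred == 1) & (target == 1))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    return 2 * tp / (2 * tp + fp + fn)

print(pixel_f1(anomaly_map, mask))          # 1.0 -- aligned
print(pixel_f1(anomaly_map, rotated_mask))  # 0.0 -- misaligned
```

The prediction is pixel-perfect against the original mask, yet scores zero against the rotated one, which is the mismatch described above.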

alexriedel1 commented 2 weeks ago

Images go to transforms, then through the model, then to metrics, then to visualizer.

The image masks are transformed too: https://github.com/openvinotoolkit/anomalib/blob/56843d2671977d07ad228e6e5d870bf7f240cf59/src/anomalib/data/base/dataset.py#L182

and the metrics are calculated on those already-transformed masks: https://github.com/openvinotoolkit/anomalib/blob/56843d2671977d07ad228e6e5d870bf7f240cf59/src/anomalib/callbacks/metrics.py#L175-L186

alexriedel1 commented 2 weeks ago

> Using the image in the batch seems to work fine, is there any reason not to do this?

For EfficientAD it might work fine, because there is no ImageNet normalization in the pre-processing transforms it uses. If you use a model that has ImageNet pre-processing, you would need to denormalize the images before displaying them. Try it with Padim and you will see what I mean.
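A sketch of the denormalization step being described, assuming torchvision-style ImageNet normalization was applied (the helper and constants are illustrative, not anomalib API):

```python
import torch

# ImageNet statistics used by torchvision-style Normalize transforms.
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def denormalize(image: torch.Tensor) -> torch.Tensor:
    """Invert ImageNet normalization on a CHW tensor so it can be displayed."""
    return (image * IMAGENET_STD + IMAGENET_MEAN).clamp(0.0, 1.0)

# Round trip: normalizing then denormalizing recovers the original image.
original = torch.rand(3, 8, 8)
normalized = (original - IMAGENET_MEAN) / IMAGENET_STD
restored = denormalize(normalized)
print(torch.allclose(restored, original, atol=1e-6))  # True
```

This only works when you know the normalization was applied with these statistics, which is exactly the uncertainty raised earlier in the thread.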