openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

Empty prediction mask and heatmap using WinCLIP in CLASSIFICATION task #2186

Open adghin opened 3 months ago

adghin commented 3 months ago

I'm not sure if this is a bug or something I'm missing, but with WinCLIP I can't get pixel metrics if the TaskType is set to CLASSIFICATION. For example, say we have a custom dataset with normal images for training and testing (we don't actually need the training images, except when they serve as the few-shot source) and abnormal images for testing. Suppose we have no ground-truth segmentation masks, so the only available task is TaskType.CLASSIFICATION. In that case I only get image-wise metrics, even if the Engine parameters pixel_metrics and task=TaskType.SEGMENTATION are set. Moreover, shouldn't we at least get the heatmap and predicted mask of the test images? For me the predicted mask is empty and the heatmap is one-coloured, either all green or all purple (like the strange heatmaps I've seen reported in other issues).
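To make the setup concrete, the Engine configuration I'm talking about looks roughly like this (just a sketch; the metric names are only examples):

```python
from anomalib import TaskType
from anomalib.engine import Engine

# Sketch of the configuration described above: pixel metrics and the
# segmentation task are requested on the Engine, but the datamodule itself
# only provides classification labels (no masks).
engine = Engine(
    task=TaskType.SEGMENTATION,
    pixel_metrics=["AUROC", "F1Score"],  # example metric names
)
```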

Any idea what this could be about?

alexriedel1 commented 3 months ago

The mask can only be created if the model learns a pixel threshold (i.e. the anomaly score above which a pixel is considered anomalous). This pixel threshold is learned from anomalous samples after training, so without ground-truth masks you will get no predicted anomaly mask. If you can't create segmentation ground truths, you can try using synthetically generated anomalies instead.
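For reference, the configuration that does give the model a pixel threshold needs ground-truth masks in the datamodule. A minimal sketch with hypothetical directory names:

```python
from anomalib import TaskType
from anomalib.data import Folder

# Sketch: with a mask_dir that holds a ground-truth mask for every abnormal
# image, the segmentation task can be used, so a pixel threshold and a
# prediction mask become available. Directory names are hypothetical.
datamodule = Folder(
    name="my_project",
    root="./datasets/my_project",
    normal_dir="train_normal",
    abnormal_dir="test_abnormal",
    normal_test_dir="test_normal",
    mask_dir="test_abnormal_masks",  # hypothetical mask directory
    task=TaskType.SEGMENTATION,
)
```

If drawing masks by hand is not an option, synthetically generated anomalies can stand in for them, but the exact setup depends on the anomalib version you are using.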

The one-coloured green heatmap, however, looks like a bug in either your setup or the model. Please share some more info on your training and inference process.

adghin commented 3 months ago

> The one-coloured green heatmap, however, looks like a bug in either your setup or the model. Please share some more info on your training and inference process.

Hi @alexriedel1, here is the code for the datamodule creation and for the testing.

Datamodule creation:

```python
from anomalib import TaskType
from anomalib.data import Folder

test_datamodule = Folder(
    name="my_project",
    root=dataset_root,
    normal_dir="train_normal",
    abnormal_dir="test_abnormal",
    normal_test_dir="test_normal",
    task=TaskType.CLASSIFICATION,
    image_size=(240, 240),
)
```

Running tests in zero-shot/few-shot configuration:

```python
from anomalib.engine import Engine
from anomalib.models import WinClip

model = WinClip()  # or WinClip(k_shot=..., few_shot_source=...) for the few-shot configuration
engine = Engine()
engine.test(model=model, datamodule=test_datamodule)
```

So the model runs validation to collect the normalization and thresholding statistics and then tests on the test dataloader, but the heatmaps are empty. Am I missing something?

alexriedel1 commented 3 months ago

What's the console output of the script? Can you show some of the output images?
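If it helps, you could also dump the raw outputs instead of the saved visualisations (a sketch based on your snippet above; it assumes the batched predictions of anomalib 1.x expose keys like anomaly_maps and pred_scores):

```python
# Sketch: inspect the raw anomaly maps to check whether they are really
# constant or only look flat after normalisation/visualisation.
predictions = engine.predict(model=model, datamodule=test_datamodule)

for batch in predictions:
    anomaly_maps = batch["anomaly_maps"]  # assumed key in the prediction batch
    pred_scores = batch["pred_scores"]    # assumed key in the prediction batch
    print(
        "scores:", pred_scores.tolist(),
        "| map min/max:", anomaly_maps.min().item(), anomaly_maps.max().item(),
    )
```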

adghin commented 3 months ago

@alexriedel1 here is the console output (from a 5-shot test run):

[Screenshot attachment "Immagine 2024-07-12 160355": console output of the 5-shot test run]

And an example of a bad image:

[Image attachment "bad_image": example anomalous test image with its heatmap]

The heatmap does not change for any of the other images either; they are all equally empty.