openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

Custom Dataset with different image size (improve results) #358

Closed shrinand1996 closed 2 years ago

shrinand1996 commented 2 years ago

I have made a dataset in the MVTec format for steel. The original image size is 4190x3040 px. If I resize the image, I lose features and hence can't detect small defects. If I train PaDiM/PatchCore with an image size of 224, the results are poor:

image_AUROC: 1.0
image_F1Score: 0.4705882668495178
pixel_AUROC: 0.8873115181922913
pixel_F1Score: 0.00214585498906672

How can I make the model's predictions more accurate? I have around 30 images in my training dataset and 10 in the testing dataset. Will adding more images help improve accuracy? Note that the defects are synthetic (no change in scaling, lighting, etc.); a simple image subtraction returns the mask.

After observing the results, I noticed that small defects are missed most often. Can someone point me in the right direction?

Possible changes could be increasing the image size, introducing tiling, etc.

djdameln commented 2 years ago

Hi, adding more images to your training set is generally a good way to improve performance and prevent overfitting. Most models in anomalib work fine on small datasets, but adding more representative training images never hurts. It also depends a bit on the algorithm you are using. Feature-extraction-based algorithms such as PaDiM and PatchCore generally require less data than algorithms that fine-tune the neural network weights (e.g. STFPM, DRAEM). You could experiment with different algorithms to see which one works best for your dataset.

Given the large resolution of your input images and the fact that the images contain small anomalies, I would suggest trying tiling. Anomalib includes a tiling mechanism that is compatible with most of the models. Tiling can be activated by setting the dataset.tiling.apply parameter in the config.yaml to true. The tile size and stride can be configured from the dataset.tiling section of the config. When tiling is activated, the images will first be resized to the image size specified in dataset.image_size, and then subdivided into tiles based on the specified tiling parameters. Please let us know if you have any additional questions or if you run into any problems with tiling!
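To make the tile size and stride concrete, here is a minimal sketch of the subdivision step, written with NumPy rather than anomalib's internal code; `tile_image` is a hypothetical helper for illustration only, and the resize to `dataset.image_size` is assumed to have already happened:

```python
# Illustration only: NOT anomalib's implementation of tiling.
# Shows how tile_size and stride determine the set of tiles cut from
# an image that has already been resized to dataset.image_size.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view


def tile_image(image: np.ndarray, tile_size: int, stride: int) -> np.ndarray:
    """Cut an (H, W, C) image into tiles of shape (tile_size, tile_size, C).

    Assumes the stride evenly covers the image, i.e. (H - tile_size) is a
    multiple of stride (likewise for W).
    """
    channels = image.shape[2]
    # All possible tile_size x tile_size windows over the spatial axes:
    # shape (H - tile_size + 1, W - tile_size + 1, 1, tile_size, tile_size, C)
    windows = sliding_window_view(image, (tile_size, tile_size, channels))
    # Keep only windows starting at multiples of the stride.
    windows = windows[::stride, ::stride, 0]
    n_h, n_w = windows.shape[:2]
    return windows.reshape(n_h * n_w, tile_size, tile_size, channels)


# A 512x512 image with tile_size=256, stride=256 yields a 2x2 grid of tiles.
img = np.arange(512 * 512 * 3, dtype=float).reshape(512, 512, 3)
tiles = tile_image(img, tile_size=256, stride=256)
print(tiles.shape)  # (4, 256, 256, 3)
```

A stride smaller than the tile size produces overlapping tiles (e.g. stride=128 here gives a 3x3 grid of 9 tiles), which trades extra computation for smoother coverage near tile borders.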

shrinand1996 commented 2 years ago

Can you provide some details about how to implement tiling, e.g. what the tile size and stride should be? And how do I run inference? Will inference also use tiling, and if so, how? Furthermore, how will it impact inference speed?