Describe the bug
When I fit() and then test() a model in anomalib, everything seems to work fine.
I can look at the images in the "results" folder from the test run, and their heat maps look great:
However, when I then export the model to ONNX and load it via OpenVINOInferencer, predict() on the same image no longer looks nearly as good:
The model seems to work well in the test run, so the problem is probably the way I run inference.
Note that the same "bad" heat map is produced whether I export and load the model in OpenVINO or ONNX format.
When I compare both screenshots, I can see that the first one from the test run shows square images, since I train with 256x256. The second one, however, is rectangular, matching the original image shape. Do I have to manually resize the image before running prediction on it?
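In case manual resizing turns out to be necessary, here is a minimal workaround sketch I would try. It assumes the exported model expects the 256x256 training resolution; resize_to_train_size is my own helper, not an anomalib API:

```python
# Workaround sketch: nearest-neighbour resize to the training resolution,
# using only numpy (no PIL dependency). The 256x256 size is an assumption
# taken from the training setup described below.
import numpy as np

def resize_to_train_size(image: np.ndarray, size: tuple[int, int] = (256, 256)) -> np.ndarray:
    """Resize an HxWxC image to `size` by nearest-neighbour row/column sampling."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]  # source row index for each output row
    cols = np.arange(size[1]) * w // size[1]  # source column index for each output column
    return image[rows][:, cols]

# Then (hypothetically) pass the resized array instead of the file path:
# result = inferencer.predict(resize_to_train_size(original_image))
```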
Dataset
Other (please specify in the text field below)
Model
N/A
Steps to reproduce the behavior
First I train the model on a custom dataset:
from anomalib.deploy import ExportType
from anomalib.engine import Engine
from anomalib.models import Padim

# Create the model
model = Padim()
engine = Engine(
    max_epochs=100,
    task=task_type,
    pixel_metrics="AUROC",
    accelerator="gpu",
    devices=-1,
    callbacks=callbacks,
)
engine.fit(datamodule=datamodule, model=model)
engine.test(
    datamodule=datamodule,
    model=model,
    ckpt_path=engine.trainer.checkpoint_callback.best_model_path,
)

# Export the model
engine.export(model=model, export_type=ExportType.OPENVINO, export_root=root_folder)
engine.export(model=model, export_type=ExportType.ONNX, export_root=root_folder)
Then in another script I load the trained model:
import os

from anomalib import TaskType
from anomalib.deploy import OpenVINOInferencer

task_type = TaskType.CLASSIFICATION
root_folder = "/data/scratch/mkw-anomalib"
img_path = os.path.join(root_folder, "images")
sample_img = os.path.join("abnormal", "easy00000413.jpg")

# Load the model
inferencer = OpenVINOInferencer(
    path=os.path.join(root_folder, "weights/openvino/model.bin"),
    metadata=os.path.join(root_folder, "weights/openvino/metadata.json"),
    device="CPU",
    task=task_type,
)

# Run the inference
result = inferencer.predict(os.path.join(img_path, sample_img))
print(result.pred_label, result.pred_score)
The last line prints:
LabelName.ABNORMAL 1.0
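As a side note on why the shape mismatch matters: a heat-map overlay only makes sense when the map and the image share a resolution. A numpy-only sketch (my own toy blend, not anomalib's actual visualizer) of that constraint:

```python
import numpy as np

def overlay_heat_map(image: np.ndarray, anomaly_map: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a [0, 1] anomaly map onto an HxWx3 image; shapes must agree."""
    if image.shape[:2] != anomaly_map.shape[:2]:
        raise ValueError(
            f"image {image.shape[:2]} vs map {anomaly_map.shape[:2]}: resize one to match"
        )
    heat = (np.clip(anomaly_map, 0.0, 1.0) * 255.0).astype(np.uint8)
    # Broadcast the single-channel heat map across the RGB channels.
    blended = (1 - alpha) * image + alpha * heat[..., None]
    return blended.astype(np.uint8)
```

With a square 256x256 test image this blends fine, but a heat map produced at the original rectangular resolution would raise immediately, which matches what the screenshots suggest.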
OS information
OS: Ubuntu 22.04
Python version: 3.10.12
Anomalib version: 1.1.0
PyTorch version: 2.2
CUDA/cuDNN version: 12.2
GPU models and configuration: 4x Nvidia RTX A6000
Any other relevant information: I'm using a custom dataset
Expected behavior
I would expect that the same image would lead to the same heat map in both the test run and during prediction.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Where to find that?
Logs
-
Code of Conduct
[X] I agree to follow this project's Code of Conduct