Open nakayamarusu opened 6 months ago
@nakayamarusu, do you observe the same behaviour with the TorchInferencer or the Engine.predict method?
@samet-akcay Can I use Intel's integrated GPU with TorchInferencer? I only have an Intel Iris Xe. Just to be sure, I used TorchInferencer to run inference on the CPU, and the results were fine.
Use CPU
from anomalib.deploy.inferencers import TorchInferencer
from anomalib.data.utils import read_image
import torch
import numpy as np
import cv2

# Load the exported PyTorch model on the CPU.
inferencer = TorchInferencer(path=r"C:\anomalib_v1\results\weights\torch\model.pt", device="cpu")

# read_image returns an RGB HWC array scaled to [0, 1].
image = read_image(r"C:\anomalib_v1\dataset\bottle\test\broken_large\000.png")
input_img = image.astype(np.float32) / 1.
# Reorder HWC -> CHW as the model expects.
image_transposed = np.transpose(input_img, (2, 0, 1))
print(image_transposed.shape)
torch_image = torch.from_numpy(image_transposed)

result = inferencer.predict(torch_image)
# heat_map is RGB; convert to BGR for OpenCV display.
cv2.imshow("result", cv2.cvtColor(result.heat_map, cv2.COLOR_RGB2BGR))
cv2.waitKey()
Use GPU
from anomalib.deploy.inferencers import TorchInferencer
from anomalib.data.utils import read_image
import torch
import numpy as np
import cv2

# Same script as above with only the device changed; "gpu" is mapped
# to CUDA internally, which is what fails in the traceback below.
inferencer = TorchInferencer(path=r"C:\anomalib_v1\results\weights\torch\model.pt", device="gpu")
image = read_image(r"C:\anomalib_v1\dataset\bottle\test\broken_large\000.png")
input_img = image.astype(np.float32) / 1.
image_transposed = np.transpose(input_img, (2, 0, 1))
print(image_transposed.shape)
torch_image = torch.from_numpy(image_transposed)
result = inferencer.predict(torch_image)
cv2.imshow("result", cv2.cvtColor(result.heat_map, cv2.COLOR_RGB2BGR))
cv2.waitKey()
(anomalib_latest_env) C:\anomalib_v1>python C:\anomalib_v1\original_code\torch_infer.py
Traceback (most recent call last):
  File "C:\anomalib_v1\original_code\torch_infer.py", line 13, in <module>
    inferencer = TorchInferencer(path=r"C:\anomalib_v1\results\weights\torch\model.pt", device="gpu")
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\anomalib\deploy\inferencers\torch_inferencer.py", line 69, in __init__
    self.checkpoint = self._load_checkpoint(path)
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\anomalib\deploy\inferencers\torch_inferencer.py", line 109, in _load_checkpoint
    return torch.load(path, map_location=self.device)
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 1422, in _load
    result = unpickler.load()
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 1392, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 1366, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 1299, in restore_location
    return default_restore_location(storage, str(map_location))
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 381, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 274, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\n-nakayama\AppData\Local\anaconda3\envs\anomalib_latest_env\lib\site-packages\torch\serialization.py", line 258, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Yeah, for a GPU on an XPU device, we need to enable XPU training support. I think this might be supported in v1.2.0.
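For reference, a quick way to check what PyTorch can actually see on this machine. The XPU check assumes the optional intel_extension_for_pytorch package, which is not part of the environment shown above:

import torch

# CUDA is NVIDIA-only; on an Intel Iris Xe this prints False,
# which is why device="gpu" (mapped to CUDA) fails above.
print("CUDA available:", torch.cuda.is_available())

# XPU is PyTorch's backend for Intel GPUs. The torch.xpu namespace
# only becomes usable here after importing intel_extension_for_pytorch
# (an assumption: the package may not be installed in this environment).
try:
    import intel_extension_for_pytorch as ipex  # noqa: F401
    print("XPU available:", torch.xpu.is_available())
except ImportError:
    print("intel_extension_for_pytorch not installed; no XPU backend")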
Describe the bug
Problem
I trained PADiM on the MVTec bottle images, exported the model to an OpenVINO-compatible file, and ran inference with OpenVINO. Inference on the CPU produces the correct heat map, but on the GPU it does not work; the only change was the device, from CPU to GPU. The GPU is an Intel Iris Xe, which is confirmed to be supported by OpenVINO.
Dataset
MVTec
Model
PADiM
Steps to reproduce the behavior
Train
Training was performed as follows.
anomalib fit -c configs/model/padim.yaml --data configs/folder_bottle.yaml
▼ padim.yaml
▼ folder_bottle.yaml
Export
The export was performed as follows.
anomalib export --model Padim --export_type OPENVINO --ckpt_path results/Padim/bottle/latest/weights/lightning/model.ckpt
Inference
Inference was run from a Python script.
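The script itself is not reproduced here; below is a minimal sketch of the equivalent call, assuming anomalib's OpenVINOInferencer and an illustrative export path:

from anomalib.deploy import OpenVINOInferencer
from anomalib.data.utils import read_image
import cv2

# The model path is illustrative; anomalib export writes the OpenVINO
# IR under a weights/openvino directory next to the Lightning checkpoint.
inferencer = OpenVINOInferencer(
    path=r"results\Padim\bottle\latest\weights\openvino\model.bin",
    device="CPU",  # changing this to "GPU" reproduces the issue
)
image = read_image(r"C:\anomalib_v1\dataset\bottle\test\broken_large\000.png")
result = inferencer.predict(image=image)
print("pred_score:", result.pred_score)
cv2.imshow("result", cv2.cvtColor(result.heat_map, cv2.COLOR_RGB2BGR))
cv2.waitKey()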
OS information
Expected behavior
Inference result
Heat map when the device used for inference is CPU:
▲ pred_score : 0.4836
▲ pred_score : 0.5605
Heat map when the device used for inference is GPU:
▲ pred_score : 0.0
▲ pred_score : 0.0
Both CPU and GPU inference use the same model; only the device="CPU" / device="GPU" argument was changed. I also downgraded OpenVINO from 2024.0.0 to 2023.2.0 and tried again, but it still does not work.
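One way to narrow this down is to bypass anomalib entirely and run the same IR on both devices with the plain OpenVINO runtime: if the GPU output is all zeros there as well, the problem lies in the OpenVINO GPU plugin rather than in anomalib's post-processing. A sketch, where the model path and the dummy input are illustrative:

import numpy as np
from openvino.runtime import Core

core = Core()
# Illustrative path; point this at the exported model.xml.
model = core.read_model(r"results\Padim\bottle\latest\weights\openvino\model.xml")

# Random input with the model's declared shape; replace with a real
# preprocessed image for a meaningful comparison.
input_shape = model.inputs[0].shape
dummy = np.random.rand(*[int(d) for d in input_shape]).astype(np.float32)

outputs = {}
for device in ("CPU", "GPU"):
    compiled = core.compile_model(model, device)
    outputs[device] = compiled(dummy)[compiled.outputs[0]]

# A large difference (or an all-zero GPU result) points at the GPU plugin.
print("max abs difference:", np.abs(outputs["CPU"] - outputs["GPU"]).max())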
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct