openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

Using"GPU", openvino also doesn't work #25665

Open MMYY-yy opened 1 month ago

MMYY-yy commented 1 month ago

OpenVINO Version

2024.2.0

Operating System

Windows System

Device used for inference

GPU

Framework

PyTorch

Model used

Padim

Issue description

inferencer = OpenVINOInferencer(
    path=openvino_model_path,    # Path to the OpenVINO IR model.
    metadata=metadata_path,      # Path to the metadata file.
    task=TaskType.SEGMENTATION,
    device="GPU",                # Run inference on the Intel GPU.
)

RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:

Check 'TRShape::merge_into(output_shape, in_copy)' failed at src/core/shape_inference/include\concat_shape_inference.hpp:49: While validating node 'opset1::Concat concat:/model/Concat_5 () -> ()' with friendly_name 'concat:/model/Concat_5': Shape inference input shapes {[0,64,64,64],[0,0,0,0],[0,0,0,0]} Argument shapes are inconsistent; they must have the same rank, and must have equal dimension everywhere except on the concatenation axis (axis 1).
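The zero-sized dimensions in the reported input shapes ({[0,64,64,64],[0,0,0,0],[0,0,0,0]}) suggest the exported IR carries dynamic shapes that the GPU plugin fails to resolve at compile time. One workaround worth trying is to pin the model to a static input shape before compiling; a minimal sketch, assuming a single 1x3x512x512 input and that the exported model.xml sits next to model.bin (the path below is assumed, not confirmed by the issue):

import openvino as ov

core = ov.Core()
model = core.read_model("./results/weights/openvino/model.xml")  # assumed export path

# Pin the input to a static NCHW shape so shape inference does not
# propagate 0-sized dimensions (512x512 matches the training config).
model.reshape([1, 3, 512, 512])

compiled = core.compile_model(model, "GPU")
print(compiled.input(0).shape)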

Step-by-step reproduction

core = ov.Core()
print(core.available_devices)

['CPU', 'GPU.0', 'GPU.1']
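To confirm which physical devices GPU.0 and GPU.1 map to (integrated vs. discrete), the device names can be queried as well; a small sketch using the standard FULL_DEVICE_NAME property:

import openvino as ov

core = ov.Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is a read-only property exposed by every plugin.
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))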

Relevant log output

No response


andrei-kochin commented 1 month ago

@MMYY-yy could you please clarify which model was used? is it https://github.com/Lornatang/PaDiM ?

MMYY-yy commented 1 month ago

@MMYY-yy could you please clarify which model was used? is it https://github.com/Lornatang/PaDiM ?

Thank you very much for your reply. What I am using is:

model:
  class_path: anomalib.models.Padim
  init_args:
    layers:

metrics:
  pixel: AUROC

mvafin commented 1 month ago

@MMYY-yy What version of anomalib and pytorch are you using? Could you provide a reproducing script?

MMYY-yy commented 1 month ago

@MMYY-yy What version of anomalib and pytorch are you using? Could you provide a reproducing script?

  1. Thank you very much for your reply. The following are the exact versions I used:

     anomalib 1.2.0.dev0
     onnx 1.16.1
     onnxruntime 1.18.1
     openvino 2024.2.0
     openvino-telemetry 2024.1.0
     torch 2.1.2+cu118
     torchaudio 2.1.2+cu118
     torchmetrics 1.4.0.post0
     torchvision 0.16.2+cu118

  2. This is my training script. CUDA is available during training; terminal output:

     GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     HPU available: False, using: 0 HPUs

from anomalib.data import MVTec
from anomalib.models import Padim, Draem, Patchcore, Cfa
from anomalib.engine import Engine
import os

if __name__ == '__main__':
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    # Create the datamodule
    datamodule = MVTec(
        root="./datasets/MVTec",
        image_size=(512, 512),
        task="segmentation",
        train_batch_size=32,
        eval_batch_size=1,
        num_workers=8,
    )

    # Setup the datamodule
    datamodule.setup()

    # Initialize the model
    model = Padim()

    # Initialize the engine
    engine = Engine()

    # Train the model
    engine.fit(datamodule=datamodule, model=model)

    # Test the model
    engine.test(datamodule=datamodule, model=model)
  3. This is the script for loading, exporting, and testing the model. Terminal output:

     RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
     Check 'TRShape::broadcast_merge_into(output_shape, input_shapes[1], autob)' failed at src/core/shape_inference/include\eltwise_shape_inference.hpp:28: While validating node 'opset1::Add Add_354 () -> ()' with friendly_name 'Add_354': Argument shapes are inconsistent.

But when device="AUTO", the code runs successfully; however, the GPU is still not used and prediction is very slow.

import os
import time
from pathlib import Path

import torch
from anomalib import TaskType
from anomalib.data.utils import read_image
from anomalib.deploy import ExportType, OpenVINOInferencer
from anomalib.engine import Engine
from anomalib.models import Padim


def load_and_export_model(model_path):
    # Load the trained checkpoint
    checkpoint = torch.load(model_path)

    # Rebuild the model and restore its weights
    model = Padim()  # must match the architecture used in training
    model.load_state_dict(checkpoint['state_dict'])

    # Create the engine
    engine = Engine(task=TaskType.SEGMENTATION)

    # Ensure the model has trainer_arguments
    if not hasattr(model, 'trainer_arguments'):
        model.trainer_arguments = {}

    # Export the model to OpenVINO IR
    engine.export(
        model=model,
        export_type=ExportType.OPENVINO,
    )

    # Report the save location
    print(f"Model saved to {engine.trainer.default_root_dir}.")
    return engine.trainer.default_root_dir


if __name__ == '__main__':
    model = "./results/Padim/MVTec/bottle/v3/weights/lightning/model.ckpt"

    load_and_export_model(model)  # export the checkpoint to OpenVINO IR

    output_path = Path("./results")
    openvino_model_path = output_path / "weights" / "openvino" / "model.bin"
    metadata_path = output_path / "weights" / "openvino" / "metadata.json"

    inferencer = OpenVINOInferencer(
        path=openvino_model_path,  # Path to the OpenVINO IR model.
        metadata=metadata_path,    # Path to the metadata file.
        device="GPU",              # Run inference on the Intel GPU.
    )

    folder_path = "./datasets/test/good1/"  # folder of test images
    test_path = "./output"                  # output directory
    if not os.path.exists(test_path):
        os.makedirs(test_path)

    png_files = [f for f in os.listdir(folder_path) if f.endswith('.png')]

    for file_name in png_files:
        image = read_image(path=folder_path + '/' + file_name)

        start_time = time.time()
        predictions = inferencer.predict(image=image)
        end_time = time.time()

        elapsed_time = end_time - start_time
        print(f"Prediction took {elapsed_time:.4f} seconds.")
        print(predictions.pred_score, predictions.pred_label)

    print("Done")