
onnxruntime-tvm #18955

Open yfirecanfly opened 11 months ago

yfirecanfly commented 11 months ago

Describe the documentation issue

When I ran inference with onnxruntime-tvm on a precompiled model, using GPU/ROCm as the backend, the following error occurred:

Check failed: (itr != physical_devices.end()) is false: Unable to find a physical device (from among the 1 given) to match the virtual device with device type 2
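
For context, the device type in TVM error messages follows the DLPack device codes, where 2 is CUDA, so the serialized VM appears to expect a CUDA device at runtime. A minimal sketch (assuming a local TVM installation) to confirm the mapping:

import tvm

# DLPack device codes: 1 = kDLCPU, 2 = kDLCUDA
print(tvm.device("cuda", 0).device_type)  # prints 2
print(tvm.device("cpu", 0).device_type)   # prints 1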

The code follows the ResNet50 example, with the only modification being target_str="cuda":

compiled_vm_exec = compile_virtual_machine(onnx_model, target_str="cuda")
so_folder = serialize_virtual_machine(compiled_vm_exec)
provider_options = [dict(
    executor="vm",
    so_folder=so_folder,
)]

# so = onnxruntime.SessionOptions()
# so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL

inference_session1 = onnxruntime.InferenceSession(
    onnx_model.SerializeToString(),
    # sess_options=so,
    providers=["TvmExecutionProvider"],
    provider_options=provider_options,
)
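
One possible direction, offered as a hedged sketch rather than a confirmed fix: the TVM EP documentation also lists target and target_host among its provider options, and passing them alongside the VM options might let the provider create the matching CUDA device at runtime:

provider_options = [dict(
    executor="vm",
    so_folder=so_folder,
    # Assumption: these keys follow the TVM EP documentation examples;
    # whether they resolve this particular device mismatch is untested here.
    target="cuda",
    target_host="llvm",
)]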

Looking forward to your help!


yfirecanfly commented 10 months ago

The documentation only shows the LLVM target. Does it currently support only the CPU?
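
For reference, "llvm" is TVM's CPU target, so the documented flow should already be the CPU path. A minimal sketch using the same helper names as above:

# Sketch: compile the VM for CPU via TVM's "llvm" target, then point the
# TVM EP at the serialized artifacts exactly as in the GPU case above.
compiled_vm_exec = compile_virtual_machine(onnx_model, target_str="llvm")
so_folder = serialize_virtual_machine(compiled_vm_exec)
provider_options = [dict(executor="vm", so_folder=so_folder)]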