microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[TVM] Exception during initialization #13572

Open LiuPeiqiCN opened 1 year ago

LiuPeiqiCN commented 1 year ago

Describe the issue

Code:

Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Tvm(sessionOptions, ""));

Exception:

Exception during initialization: D:\CMake\onnxruntime\onnxruntime\onnxruntime\core\providers\tvm\tvm_api.cc:50 
onnxruntime::tvm::TVMCompile compile != nullptr was false. Unable to retrieve 'tvm_onnx_import_and_compile'.


Build:

.\build.bat --config Release --skip_tests --build_shared_lib --parallel
 --update --build --use_tvm --llvm_config "D:\vcpkgs\LLVM\bin\llvm-config.exe" --cmake_extra_defines 
BUILD_TESTING=OFF CMAKE_INSTALL_PREFIX="install"  LLVM_DIR="D:\vcpkgs\LLVM\lib\cmake\llvm"  
onnxruntime_BUILD_UNIT_TESTS=OFF --cmake_generator "Visual Studio 17 2022"

How do I use TVM with C++? Is there any example or documentation? A sketch of what I am attempting is shown below.
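For reference, this is roughly the full usage I am aiming for (a minimal sketch only; the "target: llvm" option key, the factory header name, and the model path are assumptions on my part, not verified against the TVM EP documentation):

#include <onnxruntime_cxx_api.h>
#include "tvm_provider_factory.h"  // assumed header name exposing OrtSessionOptionsAppendExecutionProvider_Tvm

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "tvm_example");
  Ort::SessionOptions session_options;

  // Append the TVM execution provider. The settings string is assumed to take
  // comma-separated key:value pairs; an empty string ("") falls back to defaults,
  // as in the call quoted above.
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_Tvm(session_options, "target: llvm"));

  // Hypothetical model path.
  Ort::Session session(env, L"model.onnx", session_options);
  return 0;
}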

To reproduce

The exception above is raised during initialization when using the TVM execution provider.

Urgency

No response

Platform

Windows

OS Version

win11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.13.1

ONNX Runtime API

C++

Architecture

X64

Execution Provider

TVM

Execution Provider Library Version

No response

chengchen666 commented 1 year ago

I ran into the same error message. I had passed --use_cuda and --enable_training, which triggered https://github.com/microsoft/onnxruntime/blob/main/tools/ci_build/build.py#L1963 and https://github.com/microsoft/onnxruntime/blob/main/setup.py#L523, so the package_name ended up not being "onnxruntime-tvm": https://github.com/microsoft/onnxruntime/blob/main/setup.py#L597.

I solved it by removing the --use_cuda and --enable_training arguments. I'm not sure what caused your error, but basically you need to make sure the package_name is "onnxruntime-tvm", which means the code must reach https://github.com/microsoft/onnxruntime/blob/main/setup.py#L598.
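On the C++ side, one quick way to confirm that the TVM provider actually made it into your build is to list the registered providers (a small sketch; the exact name reported for the TVM EP is my assumption):

#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
  // Print every execution provider compiled into this onnxruntime build.
  for (const auto& provider : Ort::GetAvailableProviders()) {
    std::cout << provider << "\n";  // a TVM entry should appear when --use_tvm took effect
  }
  return 0;
}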

yfirecanfly commented 9 months ago

Hello, does the Python interface of onnxruntime-tvm support CUDA? When I set the target to cuda, an error is reported.