apache / tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators
https://tvm.apache.org/
Apache License 2.0

[BYOC][DNNL] Check failed: (pf != nullptr) is false: no such function in module: tvmgen_default_dnnl_main_0 #13210

Closed: yuwenjun1988 closed this issue 2 years ago

yuwenjun1988 commented 2 years ago

Expected behavior: no crash.

Actual behavior: crash, with the following stack trace:

Traceback (most recent call last):
  File "/mnt/e/tvm_mlir_learn-main/test/dnnl_test.py", line 190, in <module>
    run_and_verify_func(get_graph(relay.nn.relu), run_module=True)
  File "/mnt/e/tvm_mlir_learn-main/test/dnnl_test.py", line 173, in run_and_verify_func
    run_and_verify(
  File "/mnt/e/tvm_mlir_learn-main/test/dnnl_test.py", line 143, in run_and_verify
    func = relay.create_executor(
  File "/mnt/e/code/tvm/python/tvm/relay/backend/interpreter.py", line 171, in evaluate
    return self._make_executor()
  File "/mnt/e/code/tvm/python/tvm/relay/build_module.py", line 520, in _make_executor
    gmodule = _graph_executor.GraphModule(mod["default"](self.device))
  File "/mnt/e/code/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  5: TVMFuncCall
  4: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::GraphExecutorFactory::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  3: tvm::runtime::GraphExecutorFactory::ExecutorCreate(std::vector<DLDevice, std::allocator<DLDevice> > const&)
  2: tvm::runtime::GraphExecutor::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module, std::vector<DLDevice, std::allocator<DLDevice> > const&, tvm::runtime::PackedFunc)
  1: tvm::runtime::GraphExecutor::SetupOpExecs()
  0: tvm::runtime::GraphExecutor::CreateTVMOp(tvm::runtime::TVMOpParam const&, std::vector<DLTensor, std::allocator<DLTensor> > const&)
  File "/mnt/e/code/tvm/src/runtime/graph_executor/graph_executor.cc", line 563
TVMError: Check failed: (pf != nullptr) is false: no such function in module: tvmgen_default_dnnl_main_0

An error occurred during the execution of TVM. For more information, please see: https://tvm.apache.org/docs/errors.html
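The check that fails is the graph executor's lookup of the packed function for the offloaded DNNL subgraph. A minimal diagnostic sketch of that lookup (the helper name check_dnnl_symbol is hypothetical; lib is assumed to be the factory module returned by relay.build on the partitioned module, and the default symbol name is the one reported in the error above):

```python
def check_dnnl_symbol(lib, symbol="tvmgen_default_dnnl_main_0"):
    """Report whether the offloaded DNNL subgraph made it into the built module.

    `lib` is assumed to be the factory module returned by relay.build();
    `symbol` defaults to the name reported in the error above.
    """
    rt_mod = lib.get_lib()
    print("imported runtime modules:", [m.type_key for m in rt_mod.imported_modules])
    try:
        rt_mod.get_function(symbol, query_imports=True)
        print("found", symbol)
    except AttributeError:
        print(symbol, "is missing, which is what makes GraphExecutor::SetupOpExecs() fail")
```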

Environment: Ubuntu 18.04, TVM 0.11.dev0

Steps to reproduce

run tvm/tests/python/contrib/test_dnnl.py
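For reference, a minimal sketch of the DNNL BYOC flow that this test exercises. This is not the reporter's dnnl_test.py: the relu-only graph, input shape, and tensor name are illustrative, and it assumes TVM was built with DNNL support enabled.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.relay.op.contrib import dnnl
from tvm.contrib import graph_executor

# Build a tiny relu-only graph and hand the supported ops to the DNNL BYOC path.
x = relay.var("x", shape=(1, 32, 8, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
mod = dnnl.partition_for_dnnl(mod)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# The crash in this report happens while constructing the graph executor below.
dev = tvm.cpu(0)
gmod = graph_executor.GraphModule(lib["default"](dev))
gmod.set_input("x", np.random.rand(1, 32, 8, 8).astype("float32"))
gmod.run()
print(gmod.get_output(0).numpy().shape)
```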

yangulei commented 2 years ago

I built the latest commit 5c9066d816408bd2858c9758b0865ca08112c78f, ran python tests/python/contrib/test_dnnl.py, and all the tests passed.

I noticed you used /mnt/e/tvm_mlir_learn-main/test/dnnl_test.py instead of tests/python/contrib/test_dnnl.py. More details about your environment, such as the commit id, config.cmake, the oneDNN installation, and the test code you used, would help us reproduce the error.
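One hedged way to collect some of those details from the installed package (the exact key names can vary between TVM versions, so the snippet just filters for "DNNL"):

```python
import tvm

# Build-time options and metadata of the loaded libtvm.
info = tvm.support.libinfo()
print("commit:", info.get("GIT_COMMIT_HASH"))
print("DNNL-related build flags:", {k: v for k, v in info.items() if "DNNL" in k})
```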

masahi commented 2 years ago

Closing, assuming this is not an issue with TVM.

yuwenjun1988 commented 2 years ago

Please set `set(USE_DNNL C_SRC)` in config.cmake and try again, and also see the related issue https://github.com/apache/tvm/issues/13222.
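After rebuilding with that option, a quick hedged check that the DNNL codegen is actually registered in the Python package (upstream TVM registers the DNNL C source codegen under the `relay.ext.dnnl` global function):

```python
import tvm

# If this prints None, the USE_DNNL option most likely did not take effect.
print(tvm.get_global_func("relay.ext.dnnl", allow_missing=True))
```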