Traceback (most recent call last):
File "/home/root/workspace/tvm-vai/test/run_mxnet_resnet_18.py", line 152, in
run(file_path, shape_dict, iterations)
File "/home/root/workspace/tvm-vai/test/run_mxnet_resnet_18.py", line 95, in run
lib = tvm.runtime.load_module(file_path)
File "/home/root/workspace/tvm-vai/tvm/python/tvm/runtime/module.py", line 613, in load_module
return _ffi_api.ModuleLoadFromFile(path, fmt)
File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.call
File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
Hi, relating to this issue: https://github.com/Xilinx/Vitis-AI/issues/721, I would like to ask a similar question.

I built a "tvm_dpu_cpu.so" from an ONNX model (following https://tvm.apache.org/docs/how_to/compile_models/from_onnx.html) and set up pyxir and TVM on a Kria KV260. However, when running inference I encountered the same problem described in the link above; the traceback is shown at the top of this post.

Is there any way to work around this problem?

Best regards.
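P.S. For reference, the inference side of my script follows the edge flow from the TVM Vitis AI documentation, roughly as in the sketch below. The input name "data" and the 1x3x224x224 shape are placeholders for illustration, not necessarily what run_mxnet_resnet_18.py actually uses. One detail I tried to follow is importing pyxir before loading the compiled module, since that import is what registers the Vitis AI runtime with TVM:

import pyxir  # imported first so the Vitis AI runtime is registered with TVM
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Load the library compiled on the host (the "tvm_dpu_cpu.so" mentioned above);
# this is the call that raises the traceback at the top of this post
lib = tvm.runtime.load_module("tvm_dpu_cpu.so")

# Wrap the graph-executor factory; input name and shape are placeholders
dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.run()
out = module.get_output(0).asnumpy()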