The current onnxruntime raises the following error:

Traceback (most recent call last):
File "../../../../../common/rknn_converter/rknn_convert.py", line 182, in <module>
convert(config_dict, args)
File "../../../../../common/rknn_converter/rknn_convert.py", line 134, in convert
model_runer = Excuter(framework_excute_info)
File "/home/abc/xxx/rknn_model_zoo/common/framework_excuter/excuter.py", line 46, in __init__
model_container = ONNX_model_container(_info['model'])
File "/home/abc/xxx/rknn_model_zoo/common/framework_excuter/onnx_excute.py", line 8, in __init__
self.sess = rt.InferenceSession(model_path)
File "/home/ibuddy/anaconda3/envs/rknn/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/abc/anaconda3/envs/rknn/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 364, in _create_inference_session
"onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
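Based on the traceback, the fix is to pass the providers argument explicitly where the session is created in common/framework_excuter/onnx_excute.py. A minimal sketch, assuming the rest of the class stays as-is (the provider list is an example and should match your ORT build):

import onnxruntime as rt

class ONNX_model_container:
    def __init__(self, model_path):
        # Since ORT 1.9 the providers must be listed explicitly.
        # They are tried in order; unavailable ones are skipped at runtime.
        self.sess = rt.InferenceSession(
            model_path,
            providers=['TensorrtExecutionProvider',
                       'CUDAExecutionProvider',
                       'CPUExecutionProvider'])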
Following the error message, I specified the providers explicitly, and it now runs normally:
2022-04-12 10:57:30.122493106 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
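The TensorrtExecutionProvider warning is non-fatal: ORT simply falls back to the next provider in the list. To check which providers a build offers and which ones a session actually registered (a quick standalone check; 'model.onnx' is a placeholder path):

import onnxruntime as rt

print(rt.get_available_providers())  # providers compiled into this ORT build

sess = rt.InferenceSession('model.onnx',
                           providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
print(sess.get_providers())  # providers the session actually registered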
For output-0 : rknn(asymmetric_affine-u8) VS onnx cos_similarity[0.9992702007293701]
For output-1 : rknn(asymmetric_affine-u8) VS onnx cos_similarity[0.9993593692779541]
For output-2 : rknn(asymmetric_affine-u8) VS onnx cos_similarity[0.9994618892669678]
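For reference, the cos_similarity values above compare the quantized rknn outputs against the float onnx outputs; cosine similarity itself can be computed as below (a minimal numpy sketch; the converter's own implementation may differ):

import numpy as np

def cos_similarity(a, b):
    # Normalized dot product of the two flattened output tensors;
    # values near 1.0 mean the quantized output closely tracks the float one.
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))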
---> Eval performance
========================================================================
Performance
========================================================================
Average inference Time(us): 33666.0
FPS: 29.70
========================================================================
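For reference, the reported FPS is simply the reciprocal of the average inference time: 1,000,000 us / 33666 us ≈ 29.70.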