PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

onnx infer.py not working; requires ORT provider #7034

Open MyraBaba opened 1 year ago

MyraBaba commented 1 year ago

Issue Confirmation / Search before asking

Bug Component

No response

Describe the Bug

CUDA_VISIBLE_DEVICES=0 python3 deploy/third_engine/onnx/infer.py --infer_cfg output_inference/ppyoloe_crn_l_80e_sliced_visdrone_640_025/infer_cfg.yml --onnx_file sliced_visdrone.onnx --image_file demo/000000014439.jpg

"onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers) ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Environment

Linux, Paddle 2.2, onnxruntime-gpu

Bug description confirmation

Are you willing to submit a PR?

MyraBaba commented 1 year ago

Changed infer.py to:

predictor = InferenceSession(FLAGS.onnx_file, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

at line 144.

Now that complaint is gone.
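For reference, a slightly more defensive version of the same change (a sketch, not the exact contents of infer.py) only requests CUDA when the installed onnxruntime build actually exposes it:

```python
import onnxruntime as ort
from onnxruntime import InferenceSession

# Request CUDA only if this onnxruntime build exposes it; otherwise run on CPU.
requested = ['CUDAExecutionProvider', 'CPUExecutionProvider']
available = ort.get_available_providers()
providers = [p for p in requested if p in available] or ['CPUExecutionProvider']

predictor = InferenceSession(FLAGS.onnx_file, providers=providers)
```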

But there is a new one:

CUDA_VISIBLE_DEVICES=0 python3 deploy/third_engine/onnx/infer.py --infer_cfg output_inference/ppyoloe_crn_l_80e_sliced_visdrone_640_025/infer_cfg.yml --onnx_file sliced_visdrone.onnx --image_file demo/000000014439.jpg

2022-09-27 07:45:57.157955673 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:566 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.

----------- Model Configuration -----------
Model Arch: YOLO
Transform Order:
--transform op: Resize
--transform op: Permute

Traceback (most recent call last):
  File "deploy/third_engine/onnx/infer.py", line 148, in <module>
    predict_image(infer_config, predictor, img_list)
  File "deploy/third_engine/onnx/infer.py", line 126, in predict_image
    outputs = predictor.run(output_names=None, input_feed=inputs)
  File "/usr/local/python3.7.0/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(uint8)) , expected: (tensor(float))
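Two separate things show up in that output. The CreateExecutionProviderInstance warning means ORT could not load the CUDA provider (missing or incompatible CUDA/cuDNN libraries) and silently fell back to CPU, and the InvalidArgument error means the preprocessing pipeline is feeding uint8 while the exported graph declares a float input. Both can be confirmed directly on the session; a minimal sketch, assuming the predictor and the inputs dict from infer.py:

```python
# Which providers did the session actually end up with?
# If CUDAExecutionProvider failed to load, only CPUExecutionProvider appears here.
print(predictor.get_providers())

# What does the exported ONNX graph declare for each input?
# This shows the element type (e.g. tensor(float)) that the feed must match.
for node in predictor.get_inputs():
    print(node.name, node.type, node.shape)
```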

MyraBaba commented 1 year ago

Changed infer.py line 125 to: inputs['image'] = inputs['image'].astype(np.float32)

Now it's inferring.

But I don't see the class output? It gives only the bbox?

lyuwenyu commented 1 year ago

You can print all outputs and check the output shapes.
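A minimal way to do that with the session and inputs already built in infer.py (a sketch):

```python
# Run once and print the name, shape and dtype of every output tensor.
outputs = predictor.run(output_names=None, input_feed=inputs)
for meta, out in zip(predictor.get_outputs(), outputs):
    print(meta.name, out.shape, out.dtype)
```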

MyraBaba commented 1 year ago

The shape is 6.

Where are the labels & confidence scores?

MyraBaba commented 1 year ago

Is the 1st one the label and the second one the score?

[Screenshot: Screen Shot 2022-09-27 at 11 32 27]
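If the exported model follows PaddleDetection's usual post-NMS bbox layout, each row of that output is [class_id, score, x1, y1, x2, y2], so the 1st column would indeed be the label and the 2nd the confidence. A decoding sketch under that assumption (reusing the outputs from the run above; score_thr is an illustrative threshold):

```python
import numpy as np

# Assumed layout per row: [class_id, score, x1, y1, x2, y2];
# verify against the actual output shape printed earlier.
dets = np.asarray(outputs[0])
score_thr = 0.25  # illustrative confidence threshold

for class_id, score, x1, y1, x2, y2 in dets:
    if score < score_thr:
        continue
    print(f"label={int(class_id)} score={score:.3f} "
          f"bbox=({x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f})")
```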