microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

TensorRT input: 717 has no shape specified. #10443

Open quanliu1991 opened 2 years ago

quanliu1991 commented 2 years ago

Describe the bug
I used torch.onnx.export() to convert the Detectron2 FasterRCNN model (faster_rcnn_R_50_C4_1x.yaml) to model2with12.onnx, but when I create the session with sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers) and the EP is TensorRT, a "TensorRT input: 717 has no shape specified." error occurs.


To Reproduce
Code shown below:

import numpy as np
import onnxruntime

model_path = "./model2with12.onnx"  # the exported Detectron2 model
providers = [
    ('TensorrtExecutionProvider', {
        'device_id': 0,
    })]
sess_opt = onnxruntime.SessionOptions()
sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers)

# Random uint8 image matching the reported input shape [3, 800, 1202].
image = np.random.randint(1, 255, size=(3, 800, 1202), dtype=np.uint8)
sess.run([sess.get_outputs()[0].name], {sess.get_inputs()[0].name: image})

The following error occurs in onnxruntime.InferenceSession:

2022-01-30 19:02:16.145065205 [E:onnxruntime:, inference_session.cc:1448 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 717 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

Running symbolic_shape_infer on the model also raises an error:

python symbolic_shape_infer.py --input ./model2with12.onnx  --output ./out_model2with12.onnx --auto_merge --verbose 3
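
For reference, the same shape-inference pass can be invoked from Python; this is a minimal sketch, assuming the module that ships inside the onnxruntime wheel as onnxruntime.tools.symbolic_shape_infer:

import onnx
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

model = onnx.load("./model2with12.onnx")
# auto_merge=True mirrors the --auto_merge flag above: conflicting
# symbolic dimensions are merged instead of aborting inference.
inferred = SymbolicShapeInference.infer_shapes(model, auto_merge=True, verbose=3)
onnx.save(inferred, "./out_model2with12.onnx")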

I don't know how to solve this kind of problem; I expected the ONNX model to work with the TensorRT EP.


Additional context
model2with12.onnx download link: https://drive.google.com/file/d/1_egymUZukkjzNfNDSVYIzLGpGRBfuIRQ/view?usp=sharing
Input image ndarray info: shape is [3, 800, 1202], dtype is uint8.

garymm commented 2 years ago

@quanliu1991 mentioned in #10399 that this error does not arise for the CUDA EP but does for TensorRT, so I don't think this is a converter issue.

faxu commented 2 years ago

cc @jywu-msft for TRT EP

stale[bot] commented 2 years ago

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

13572320829 commented 2 years ago

I dealt with this problem successfully. You must convert your ONNX model as follows:

  1. pip install onnxsim
  2. onnxsim input_onnx_model output_onnx_model

Then you can load the new model successfully; a Python-API sketch of the same step is shown below.
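
The same simplification can be done from Python (a minimal sketch, assuming onnxsim's simplify API; the paths are the placeholders from the command above):

import onnx
from onnxsim import simplify

model = onnx.load("input_onnx_model")
# simplify() folds constants and infers shapes, returning the simplified
# model plus a flag indicating whether its outputs matched the original.
model_simp, check = simplify(model)
assert check, "simplified model failed the validation check"
onnx.save(model_simp, "output_onnx_model")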
876399730 commented 1 year ago

I met the same problem: when I choose CUDA it works, but TensorRT does not, so it does not seem to be a conversion problem. If you have solved it, please reply to me, thanks.

quanliu1991 commented 1 year ago

@876399730 You can try:

pip install onnxsim
onnxsim input_onnx_model output_onnx_model

minhhoangho commented 5 months ago

Hi @quanliu1991, I get this issue when initializing the model after converting it to ONNX.

I followed the ONNX export script in the mmyolo repo (https://github.com/open-mmlab/mmyolo/blob/main/projects/easydeploy/tools/export_onnx.py).

But when I load it using onnxruntime-gpu, it raises this error:

[E:onnxruntime:, inference_session.cc:1981 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:2191 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: /TopK_output_1 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

But when I tried to load it using onnxruntime only (CPU), it works normally.

Could you please help me to solve this issue?

I also tried to use onnxsim, but the error still happens.
Thank you so much.

Python: 3.8
Dependencies info:
- onnxruntime: 1.17.1
- onnxruntime-gpu: 1.17.1
- tensorrt: 8.6.1.post1
- torch: 2.0.1
Cuda info:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
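
One thing worth checking here (a diagnostic sketch; the model path is a placeholder): whether standard ONNX shape inference can assign a shape to the tensor named in the error at all. TopK outputs are data-dependent, so plain shape inference often cannot resolve them, which is why the error message points to the symbolic shape-inference script instead.

import onnx

model = onnx.load("end2end.onnx")  # placeholder path for the exported model
inferred = onnx.shape_inference.infer_shapes(model)
for vi in inferred.graph.value_info:
    if vi.name == "/TopK_output_1":
        # An empty or partially-unknown shape here means TensorRT cannot
        # statically size this input, matching the error above.
        print(vi.type.tensor_type.shape)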
Masrur02 commented 3 weeks ago

Hi @minhhoangho, have you solved the issue?