Open quanliu1991 opened 2 years ago
@quanliu1991 mentioned in #10399 that this error does not arise for the CUDA EP but does for TensorRT, so I don't think this is a converter issue.
cc @jywu-msft for TRT EP
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
I dealt with this problem successfully. You have to convert your ONNX model first, as follows.
I ran into the same problem: when I choose the CUDA EP it works, but TensorRT does not, so it does not seem to be a conversion issue. If you solve it, please reply. Thanks.
@876399730 You can try:
pip install onnxsim
onnxsim input_onnx_model output_onnx_model
Hi @quanliu1991, I get this issue when initializing the model after converting it to onnx.
I followed the ONNX export script in the mmyolo repo (https://github.com/open-mmlab/mmyolo/blob/main/projects/easydeploy/tools/export_onnx.py):
python projects/easydeploy/tools/export_onnx.py \
$config_file \
$chkp_file \
--work-dir $out_dir \
--img-size 640 640 \
--batch 1 \
--device cuda:0 \
--simplify \
--opset 11 \
--pre-topk 1000 \
--keep-topk 100 \
--iou-threshold 0.65 \
--score-threshold 0.25
But when I load it using onnxruntime-gpu, it raises this error:
[E:onnxruntime:, inference_session.cc:1981 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:2191 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: /TopK_output_1 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
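The error message says to run shape inference on the model first, and the linked ONNX Runtime docs describe a symbolic shape inference script for exactly this TensorRT-subgraph case. A minimal sketch of invoking it as a module, assuming an onnxruntime install that ships the tool; the file names are placeholders for your own model:

```shell
# Run ONNX Runtime's symbolic shape inference so intermediate tensors
# (e.g. /TopK_output_1) carry shapes that the TensorRT EP can consume.
# --auto_merge attempts to resolve conflicting symbolic dimensions.
python -m onnxruntime.tools.symbolic_shape_infer \
    --input model.onnx \
    --output model_with_shapes.onnx \
    --auto_merge
```

Then point InferenceSession at the output file (model_with_shapes.onnx) instead of the original export.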
But when I load it using onnxruntime (CPU) only, it works normally.
Could you please help me solve this issue?
I also tried onnxsim, but the error still happens.
Thank you so much.
Python: 3.8
Dependencies info:
- onnxruntime: 1.17.1
- onnxruntime-gpu: 1.17.1
- tensorrt: 8.6.1.post1
- torch: 2.0.1
Cuda info:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
Hi, @minhhoangho Have you solved the issue?
Describe the bug
I converted the Detectron2 FasterRCNN model (faster_rcnn_R_50_C4_1x.yaml) to model2with12.onnx using torch.onnx.export(). When the EP is TensorRT, creating the session fails. Running symbolic_shape_infer on the ONNX model then raises a new error as well. I don't know how to solve this kind of problem.
To Reproduce
The following error occurs in sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers):
TensorRT input: 717 has no shape specified.
Expected behavior
I expected the ONNX model to work with the TensorRT EP.
Additional context
model2with12.onnx download link: https://drive.google.com/file/d/1_egymUZukkjzNfNDSVYIzLGpGRBfuIRQ/view?usp=sharing
Input image ndarray info: shape is [3, 800, 1202], dtype is uint8.
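One way to check whether the exported model is consumable by TensorRT at all, independent of ONNX Runtime, is TensorRT's trtexec tool. A hedged sketch, assuming trtexec (shipped with the TensorRT distribution) is on PATH and the model file from this thread is local:

```shell
# Try to parse and build the model directly with TensorRT, bypassing
# ONNX Runtime entirely; --verbose shows which layer/input fails.
# The model file name is taken from this thread and may differ locally.
trtexec --onnx=model2with12.onnx --verbose
```

If trtexec fails on the same tensor, the problem is in the exported graph (missing shapes), not in the ONNX Runtime TensorRT EP integration.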