microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

raise Exception("Incomplete symbolic shape inference") when running "symbolic_shape_infer.py" #10484

Open · kobzaond opened this issue 2 years ago

kobzaond commented 2 years ago

When I convert BERT (a PyTorch model) to ONNX format (without any optimizations) and then run the "symbolic_shape_infer.py" script with the resulting ONNX model as the input argument, I get the following error:

```
  File "symbolic_shape_infer.py", line 2096, in
    args.guess_output_rank, args.verbose)
  File "symbolic_shape_infer.py", line 2062, in infer_shapes
    raise Exception("Incomplete symbolic shape inference")
Exception: Incomplete symbolic shape inference
```

The BERT model has a single classification layer on top (2 classes, randomly initialized, no fine-tuning), so it is literally BertForSequenceClassification.from_pretrained('bert-base-uncased'). Note that inference with the ONNX model through onnxruntime works.
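
A minimal repro sketch along these lines (the export details such as input names, dynamic axes, and opset version are illustrative assumptions, not part of the original report):

```python
# Illustrative repro sketch: export BertForSequenceClassification to ONNX,
# then run symbolic shape inference on the result. Input names, dynamic
# axes, and opset version are assumptions, not taken from the report.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.config.return_dict = False  # export a plain tuple output for tracing
model.eval()

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("hello world", return_tensors="pt")

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)
```

and then running the script on the exported model:

```
python symbolic_shape_infer.py --input bert.onnx --output bert.shaped.onnx
```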

zhanghuanrong commented 2 years ago

Thanks for reporting! Could you please share a link to the model for further investigation?

kobzaond commented 2 years ago

Here is the link: https://huggingface.co/bert-base-uncased. Note that the model has a single (classification) linear layer on top.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

trajepl commented 2 years ago

Any updates on this issue? I hit the same one. @zhanghuanrong

My environment:

```
ort-nightly-gpu==1.13.0.dev20221005006
onnx==1.9.0
```

Same model (https://huggingface.co/bert-base-uncased), hit when I run the preprocess step:

```
python -m onnxruntime.quantization.preprocess --input bert.opt.onnx --output bert.opt.pre_process.onnx
```
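
If it is easier to script, the same preprocess step can (as far as I can tell) be driven from Python; a sketch, assuming your onnxruntime version ships quant_pre_process with a skip_symbolic_shape flag:

```python
# Sketch, assuming quant_pre_process exists in your onnxruntime version
# (it wraps the same step as `python -m onnxruntime.quantization.preprocess`).
from onnxruntime.quantization.shape_inference import quant_pre_process

quant_pre_process(
    "bert.opt.onnx",              # input model
    "bert.opt.pre_process.onnx",  # output model
    skip_symbolic_shape=False,    # setting True skips the failing symbolic pass
)
```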

ztqsteve commented 1 year ago

Same issue: none of my models can be processed by onnxruntime.quantization.preprocess, yet they all run fine in an onnxruntime inference session.

Hadrien-Cornier commented 1 year ago

Same issue! I cannot convert my ONNX model to TensorRT, so I tried the symbolic_shape_infer.py script, but it raises Exception: Incomplete symbolic shape inference.

Zeng1998 commented 1 year ago

Same issue, and not only for transformer models.

riZZZhik commented 1 year ago

Same issue with transformer and diffusion models.

treksis commented 1 year ago

Same issue with the CodeFormer upscaler.

wangskyone commented 1 year ago

I solved this problem by running optimization first and then quantization; hope this helps.

First run:

```
python -m onnxruntime.transformers.optimizer --input=release/class.onnx --output=release/class.ops.onnx
```

Then run:

```
python -m onnxruntime.quantization.preprocess --input release/class.ops.onnx --output release/class.ops-infer.onnx
```

(onnx==1.14.1)
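
For completeness, a sketch of the same two steps from Python; optimize_model with model_type="bert" is my assumption about the onnxruntime.transformers API for the models discussed here, so treat it as illustrative:

```python
# Sketch of the two-step workaround above, driven from Python instead of the CLI.
# Assumes onnxruntime.transformers.optimizer.optimize_model and
# onnxruntime.quantization.shape_inference.quant_pre_process exist in your version.
from onnxruntime.transformers.optimizer import optimize_model
from onnxruntime.quantization.shape_inference import quant_pre_process

# Step 1: graph optimization (operator fusions etc.); model_type="bert"
# is an assumption matching the models in this thread.
optimized = optimize_model("release/class.onnx", model_type="bert")
optimized.save_model_to_file("release/class.ops.onnx")

# Step 2: quantization preprocess, which runs symbolic shape inference.
quant_pre_process("release/class.ops.onnx", "release/class.ops-infer.onnx")
```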

mazzma12 commented 1 month ago

> I solved this problem by running optimization first and then quantization [...]

This does not work for me on onnx==1.14.1, nor on version 1.16.1.

kabyanil commented 1 month ago

> I solved this problem by running optimization first and then quantization [...]

I tried this out for a custom-coded PyTorch transformer model. The first command runs, but the second does not; it throws the same Exception: Incomplete symbolic shape inference.

My onnx versions:

```
onnx         1.16.2
onnxruntime  1.19.2
onnxscript   0.1.0.dev20240918
```