schyun9212 / maskrcnn-benchmark

Converting maskrcnn-benchmark model to TorchScript or ONNX

failed to create session with ShapeInferenceError #5

Open schyun9212 opened 4 years ago

schyun9212 commented 4 years ago

🐛 Bug

I exported the model to ONNX successfully, but creating an ONNX Runtime session fails.

To Reproduce

import onnx
import onnxruntime as ort

TEST_IMAGE_PATH = "./sample.jpg"
MODEL_PATH = "model.onnx"

model = onnx.load(MODEL_PATH)
onnx.checker.check_model(model)

ort_session = ort.InferenceSession(MODEL_PATH) # <-- error occurred here

Expected behavior

The ONNX Runtime session should be created without errors. Instead, session creation fails with the following output:
Warning: ATen was a removed experimental ops. In the future, we may directly reject this operator. Please update your model as soon as possible.
(the warning above is printed 8 times)
Traceback (most recent call last):
  File "test_onnx.py", line 14, in <module>
    ort_session = ort.InferenceSession(MODEL_PATH)
  File "/home/jade/.pyenv/versions/maskrcnn-tracing-latest/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 25, in __init__
    self._load_model(providers)
  File "/home/jade/.pyenv/versions/maskrcnn-tracing-latest/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 43, in _load_model
    self._sess.load_model(providers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node () Op (ConstantOfShape) [ShapeInferenceError] Invalid shape value: 0

Environment

PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.44
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] onnx==1.6.0
[pip3] onnxruntime==1.1.0
[pip3] onnxruntime-gpu==1.1.0
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] Could not collect
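For anyone debugging this, one way to see which nodes trigger the warnings and the shape-inference failure is to walk the exported graph with the onnx Python API. This is only a diagnostic sketch (MODEL_PATH is the same file as in the repro above), not a fix:

import onnx
from collections import Counter

MODEL_PATH = "model.onnx"

model = onnx.load(MODEL_PATH)

# Count operator types to see how many ATen fallback nodes survived export;
# these are what produce the "removed experimental ops" warnings.
op_counts = Counter(node.op_type for node in model.graph.node)
print(op_counts.get("ATen", 0), "ATen nodes in the graph")

# List every ConstantOfShape node and its inputs; the error message suggests
# one of them ends up with a zero shape value, which ONNX Runtime rejects
# during shape inference ("Invalid shape value: 0").
for node in model.graph.node:
    if node.op_type == "ConstantOfShape":
        print(node.name or "<unnamed>", "inputs:", list(node.input))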

Li-chunming commented 4 years ago

I got the same problem as you. Maybe we should use the PyTorch 1.2 nightly to export.
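If re-exporting, something along these lines is what I would try. This is only a sketch: MaskRCNNWrapper, the dummy input shape, and the opset choice are assumptions and come from nothing in this repo, so adapt them to the actual export script:

import torch

# Hypothetical wrapper around the maskrcnn-benchmark model; the real wrapper
# and input preprocessing come from the export code in this repository.
model = MaskRCNNWrapper()  # assumption: an nn.Module that returns plain tensors
model.eval()

dummy_input = torch.randn(1, 3, 800, 800)  # assumption: the size used at export time

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=10,          # pick an opset the installed onnxruntime supports
    do_constant_folding=True,  # fold constant subgraphs before they reach ORT
    # ONNX_ATEN_FALLBACK keeps ops without an ONNX mapping as ATen nodes, which
    # is what causes the "ATen was a removed experimental ops" warnings; the
    # default ONNX mode fails at export time instead, so this is a trade-off.
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)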