Open shridharkini6 opened 5 years ago
We need a bit more information (specific steps to reproduce bugs, model in question if possible). From your description, I'm not even sure what you are doing is relevant to onnx-tensorflow.
I was trying to convert a PyTorch model to ONNX.
Sounds like you need to post an issue with PyTorch developers.
@tjingrant please find the source code and model at https://drive.google.com/drive/folders/1QYlHnQyt0qgmBuuQR1z2B-Vn_ejzK1GA?usp=sharing
Please run:
$ git clone https://github.com/dangweili/pedestrian-attribute-recognition-pytorch.git
$ mv peta_dataset.pkl pedestrian-attribute-recognition-pytorch/dataset/peta/
Please let me know if you find any issues running it.
Python: 2.7.12
ONNX version: 1.4.1
ONNX-TF version: 1.2.1
TensorFlow version: 1.5.0
@shridharkini6 Could you paste your error log here instead of just asking us to run it?
I had the same issue, and the error log is:
Traceback (most recent call last):
File "converter.py", line 178, in <module>
torch.onnx.export(model, img, "model.onnx", verbose=True)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/__init__.py", line 27, in export
return utils.export(*args, **kwargs)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/utils.py", line 104, in export
operator_export_type=operator_export_type)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/utils.py", line 281, in _export
example_outputs, propagate)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/utils.py", line 227, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/utils.py", line 155, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/__init__.py", line 52, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/utils.py", line 504, in _run_symbolic_function
return fn(g, *inputs, **attrs)
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 88, in wrapper
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 88, in <listcomp>
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/home/mgu/anaconda3/envs/converter-gpu36/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 45, in _parse_arg
raise RuntimeError("ONNX symbolic expected a constant value in the trace")
RuntimeError: ONNX symbolic expected a constant value in the trace
I am also facing the same problem.
The error seems to be coming from the PyTorch converter, not onnx-tf. Please clarify and provide the exact code that invokes the onnx-tf backend converter. We will close this issue if it is determined not to be related to onnx-tf.
Same error here. Have you fixed it?
Describe the bug
I was trying to convert a PyTorch model to ONNX and got the above error from torch/onnx/symbolic.py. What could be the reason?
Python: 2.7.12
ONNX version: 1.4.1
ONNX-TF version: 1.2.1
TensorFlow version: 1.5.0