Closed: rohit497 closed this issue 2 years ago
Please check with the exporter group. There could be various reasons why only one input shows up in the ONNX model graph (for example, the Torch exporter may deem one input to be a constant, in which case it ends up as an initializer in the ONNX model rather than a graph input).
This is the hint I used to make that statement:
cc @garymm can you take a look? Thx.
@rohit497 I'm pretty sure the code as written does not run to the point of being able to export.
```python
mask = np.ones((23, 3))
mask = mask.to('cuda')
```

prints:

```
AttributeError: 'numpy.ndarray' object has no attribute 'to'
```
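A minimal sketch reproducing that failure, with a hedged fix in the comments (the `torch.from_numpy` conversion is my assumption about what was intended, not code from the issue):

```python
import numpy as np

mask = np.ones((23, 3))

# numpy arrays have no .to() method, hence the AttributeError above
try:
    mask.to('cuda')
except AttributeError as e:
    print(e)  # 'numpy.ndarray' object has no attribute 'to'

# Assumed fix: convert to a torch.Tensor first, then move it to the device:
#   import torch
#   mask = torch.from_numpy(mask).to('cuda')
```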
I agree with Hari that it's quite possible the trace is seeing one of the inputs as a constant and not as a variable. See https://pytorch.org/docs/master/onnx.html#avoid-numpy-and-built-in-python-types.
I suggest exporting with verbose=True to inspect the traced graph, and:
If you tried the suggestions and are still having issues using the latest nightly PyTorch, please open an issue in github.com/pytorch/pytorch.
I'm trying to export a deep neural network (AutoInt) PyTorch model with 2 inputs to a .onnx model. I'm able to save the model successfully, but when I create an onnxruntime.InferenceSession, session.get_inputs() returns only 1 input and I'm not sure why.
System information
Model architecture
Model initialization and saving