Closed schyun9212 closed 4 years ago
There is no concept of a tensor list in ONNX. Without this concept, it is very hard to export operators that consume or produce tensor lists, especially when the length of the tensor list is not known at export time.
x = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

# This is not exportable: unbind produces a tensor list.
class Model(torch.nn.Module):
    def forward(self, x):
        return x.unbind(0)

# This is exportable.
# Note that in this example we know the split operator will always produce exactly three outputs,
# thus we can export to ONNX without using a tensor list.
class AnotherModel(torch.nn.Module):
    def forward(self, x):
        return [torch.squeeze(out, 0) for out in torch.split(x, [1, 1, 1], dim=0)]
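As a sanity check on the workaround above, the following self-contained sketch (assuming only a local PyTorch install; no ONNX export is performed here) verifies that the fixed-size split-and-squeeze pattern produces the same tensors as unbind when the length is known ahead of time:

```python
import torch

x = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

class AnotherModel(torch.nn.Module):
    def forward(self, x):
        # Split into three size-1 chunks along dim 0, then drop that dim.
        return [torch.squeeze(out, 0) for out in torch.split(x, [1, 1, 1], dim=0)]

fixed = AnotherModel()(x)     # three 1-D tensors from the exportable pattern
dynamic = x.unbind(0)         # three 1-D tensors from the non-exportable op

# Both approaches yield the same three row tensors.
for a, b in zip(fixed, dynamic):
    assert torch.equal(a, b)
```

The equivalence only holds because the split sizes `[1, 1, 1]` hard-code the number of rows; for an input whose first dimension is unknown at export time, this substitution is not available, which is the core of the issue.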
Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.44
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] Could not collect