
assert(permutation[0] == 0) # cannot move batch dim #377

Closed NazarovAV closed 2 years ago

NazarovAV commented 4 years ago

Hi! I'm trying to convert GRCNN to TensorRT, but I get an error:

    import torch
    from torch2trt import torch2trt
    from GRCNN import GRCNN

    # create the model and a dummy input on the GPU
    model = GRCNN(23).eval().cuda()
    x = torch.ones(1, 1, 32, 100).float().cuda()

    # convert to TensorRT
    model_trt = torch2trt(model, [x])

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6-linux-aarch64.egg/torch2trt/torch2trt.py", line 436, in torch2trt
        outputs = module(*inputs)
      File "/home/nvidia/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/nvidia/zmq_service_for_detectlp/grcnn/GRCNN.py", line 95, in forward
        conv = conv.permute(2, 0, 1)  # [w, b, c]
      File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6-linux-aarch64.egg/torch2trt/torch2trt.py", line 218, in wrapper
        converter["converter"](ctx)
      File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6-linux-aarch64.egg/torch2trt/converters/permute.py", line 17, in convert_permute
        assert(permutation[0] == 0)  # cannot move batch dim
    AssertionError
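For context: torch2trt builds the network with an implicit batch dimension, so dim 0 is the batch and the permute converter refuses to move it. GRCNN's `conv.permute(2, 0, 1)` ([b, c, w] to [w, b, c]) therefore trips the assertion. One workaround is to keep the batch dim at position 0 and feed the RNN batch-first. A minimal sketch of that rewrite; the layer sizes and module names here are illustrative, not GRCNN's actual code:

    import torch
    import torch.nn as nn

    class BatchFirstHead(nn.Module):
        """Illustrative stand-in for GRCNN's CNN-to-RNN glue.

        GRCNN-style code does `conv.permute(2, 0, 1)` to get [w, b, c] for
        its RNN, which moves the batch dim and trips torch2trt's permute
        converter. Keeping the batch dim at position 0 and using
        batch_first=True is equivalent.
        """
        def __init__(self, channels=512, hidden=256, num_classes=23):
            super().__init__()
            self.rnn = nn.LSTM(channels, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, conv):          # conv: [b, c, 1, w]
            conv = conv.squeeze(2)        # [b, c, w]
            conv = conv.permute(0, 2, 1)  # [b, w, c] -- batch dim stays at 0
            out, _ = self.rnn(conv)       # [b, w, hidden]
            return self.fc(out)           # [b, w, num_classes]

(Whether torch2trt of that era has a converter for `nn.LSTM` at all is a separate question; this only addresses the permute assertion.)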

If I convert GRCNN to ONNX I get warnings, but grcnn.onnx is still created:

    torch.onnx.export(model, x, "grcnn.onnx", input_names=['input'], output_names=['output'], export_params=True)

    /home/nvidia/zmq_service_for_detectlp/grcnn/GRCNN.py:93: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert h == 1, "the height of conv must be 1"
    /home/nvidia/.local/lib/python3.6/site-packages/torch/onnx/symbolic_opset9.py:1436: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with LSTM can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model.
      "or define the initial states (h0/c0) as inputs of the model. ")
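As an aside, the TracerWarning points at the Python-level `assert h == 1` in GRCNN.py: the tracer evaluates the comparison once and bakes the result into the graph, which is harmless as long as the input height is fixed. A sketch of keeping the eager-mode check while silencing the warning, assuming a PyTorch version that provides `torch.jit.is_tracing()` (a toy module, not GRCNN's actual forward):

    import torch
    import torch.nn as nn

    class HeightCheck(nn.Module):
        """Toy module reproducing GRCNN's `assert h == 1` pattern."""
        def forward(self, conv):
            b, c, h, w = conv.size()
            # Python asserts on tensor shapes run once at trace time; skipping
            # them while tracing avoids the TracerWarning without losing the
            # check in eager mode.
            if not torch.jit.is_tracing():
                assert h == 1, "the height of conv must be 1"
            return conv.squeeze(2)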

Then, when I try to convert the ONNX model to TensorRT, I get an error:

    with open('grcnn.onnx', 'rb') as f:
        parser.parse(f.read())

`parser.parse` returns `False`, and building the engine then fails:

    engine = builder.build_cuda_engine(network)

    [TensorRT] ERROR: Network must have at least one output
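"Network must have at least one output" is usually a downstream symptom: `parser.parse` returned `False`, so the network is empty by the time `build_cuda_engine` runs. Printing the parser's recorded errors shows the real cause. A minimal sketch using the standard tensorrt Python API (file name as above):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()  # implicit-batch network, older-style API
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open('grcnn.onnx', 'rb') as f:
        if not parser.parse(f.read()):
            # parse() returning False leaves the network incomplete, which is
            # what later triggers "Network must have at least one output"
            for i in range(parser.num_errors):
                print(parser.get_error(i))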

Sukeysun commented 3 years ago

Hi, have you solved this problem? Could you let me know how you solved it?