ShreyasSkandanS closed this issue 5 years ago.
Hi ShreyasSkandanS,
Thanks for reaching out. Could you share the version of TensorRT you are using?
Also, can you provide the PyTorch model you're using? Sometimes the solution can be a bit nuanced.
The trained weights likely aren't necessary.
Best, John
I would prefer to share the model + weights offline, if that is possible?
Sure, jwelsh@nvidia.com
Silly me, I believe I just reached out to you.
Solved.
I got the same problem, any help?
Hi all,
I believe the solution was to increase the max workspace size. You can do this by setting the `max_workspace_size` parameter. For example:

`model_trt = torch2trt(model, [data], max_workspace_size=1<<25)`

should work.
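For completeness, here is a minimal end-to-end sketch of that call. The torchvision ResNet-18 and the 224x224 input shape are just placeholders; swap in your own network and input size:

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Placeholder model; substitute your own network here.
model = resnet18(pretrained=True).cuda().eval()

# Example input with the same shape the model will see at inference time.
data = torch.randn(1, 3, 224, 224).cuda()

# Convert with a larger workspace (1 << 25 bytes = 32 MiB) so TensorRT has
# enough scratch memory to select kernels for every layer.
model_trt = torch2trt(model, [data], max_workspace_size=1 << 25)

# Sanity check: the TensorRT output should closely match the PyTorch output.
print(torch.max(torch.abs(model(data) - model_trt(data))))
```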
Best, John
This answer really helped me a lot. Thanks!
I have a network (based on ERFNet) and some trained weights for this model. I'm trying to convert it from PyTorch to TensorRT and I do the following:
But I get the above error. Is there an unsupported layer in here somewhere? I don't believe there are any fancy layers in this architecture. I'm happy to share both the network design and the weights files. I tested this on both my laptop (MX150) and a Jetson Xavier (JetPack 4.2).
The network was trained using PyTorch 1.1.
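The original snippet was not included above; a typical conversion attempt along these lines would look roughly like the sketch below. The `ERFNet` module name, checkpoint path, class count, and 512x1024 input resolution are all assumptions, not the actual code from this issue:

```python
import torch
from torch2trt import torch2trt
from erfnet import ERFNet  # hypothetical import for the ERFNet-based model

# Load the trained weights (path and number of classes are placeholders).
model = ERFNet(num_classes=20)
model.load_state_dict(torch.load('erfnet_weights.pth'))
model = model.cuda().eval()

# Dummy input matching the resolution the network expects (assumed here).
data = torch.randn(1, 3, 512, 1024).cuda()

# The conversion step where the reported error occurs.
model_trt = torch2trt(model, [data])
```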