Closed · Jimaras08 closed this 1 year ago
Probably an issue with the K80; see https://github.com/NVIDIA/TensorRT/issues/1816 or https://github.com/NVIDIA/TensorRT/issues/2039. Switching from a K80 to a P40 worked like a charm. FYI: exporting an engine in one environment and running inference in a different one is unlikely to work!
I had a similar issue and solved it by downgrading to an older TensorRT version. You can try "nvidia-tensorrt-8.4.1.5"; that may solve it.
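If you go the downgrade route, one way to pin that exact version is via pip. This is a sketch under the assumption that you are using the `nvidia-tensorrt` Python wheel, which was distributed through NVIDIA's pip index at the time; your CUDA setup may require a different package or install method:

```shell
# Sketch: install the older TensorRT 8.4.1.5 Python wheel (assumes the
# nvidia-tensorrt package from NVIDIA's pip index matches your CUDA version).
pip install nvidia-pyindex
pip install nvidia-tensorrt==8.4.1.5
```

Afterwards you can confirm the pinned version with `python -c "import tensorrt; print(tensorrt.__version__)"`.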
Your GPU memory is not enough to support the conversion.
Hi,
Exporting .pt to .onnx works:
Exporting .onnx to .trt doesn't:
I'm working within Azure ML on Standard_NC24 (4 GPU) where 1 GPU = one-half K80 card.
Any help would be greatly appreciated.
Thank you!
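For reference, a typical .onnx → .trt conversion can be done with TensorRT's `trtexec` tool; the sketch below uses placeholder file names (`model.onnx`, `model.trt`). Note that, per the issues linked above, TensorRT 8.x dropped support for Kepler-class GPUs such as the K80, so this step is expected to fail on that hardware regardless of flags:

```shell
# Sketch: build a TensorRT engine from an ONNX model with trtexec.
# model.onnx / model.trt are placeholder paths.
trtexec --onnx=model.onnx \
        --saveEngine=model.trt \
        --workspace=4096   # builder workspace in MiB; lower it if GPU memory is tight
```

On a half-K80 (~12 GB), keeping the workspace small can help rule out the out-of-memory explanation suggested above before concluding the GPU itself is unsupported.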