Closed: TengFeiHan0 closed this issue 4 years ago
We use TensorRT 7.0, and had to make some minor modifications to the model structure in order to successfully convert it (it's the same model, but written in a different way). We will soon release our TensorRT conversion script, including converted onnx models.
TensorRT 7.1.3 provides a script to convert PackNet
to TRT-compatible ONNX; however, the ONNX editing tool the script depends on is missing from their package. Hopefully it will be added soon, since TRT 7.1.3 is currently GA. That's not a huge problem: the script still helps by showing how, and which nodes, to edit.
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#onnx_packnet
@krishna-esrlabs Thank you for informing me. I'm very glad to hear that this work has been integrated into the official TensorRT samples. So I only need to build a Docker environment to run this sample directly, is that right? As far as I know, https://github.com/NVIDIA/TensorRT is at version 7.0; is it compatible with the new TRT 7.1 GA?
Following up on the TensorRT Docker version: I was able to reproduce the official sample with the pre-built image nvcr.io/nvidia/tensorrt:20.11-py3. It requires Python 3.6 and CUDA 11.0.
Dear author, I'm trying to accelerate the released models using TensorRT, but I ran into some problems. My environment is as follows: