TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

details about your tensorrt implementation #15

Closed TengFeiHan0 closed 4 years ago

TengFeiHan0 commented 4 years ago

dear author, I'm trying to accelerate your models with TensorRT, but I ran into some problems. My environment is as follows:


TensorRT 7.0
onnx-tensorrt
PyTorch 1.4.0
onnx 1.6.0
When converting the ONNX model to TensorRT, I found that some operations (Pad, group norm) of the given model could not be parsed successfully. I was thinking that using the newest TensorRT might cover all operations, so what is your TensorRT version? If possible, would you mind sharing your converted ONNX model with me? @AdrienGaidon-TRI @VitorGuizilini-TRI @spillai
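Before attempting a full conversion, it can help to surface the ops the parser will reject up front. A minimal sketch of that idea (the supported-op set below is purely illustrative, not the real TensorRT 7.0 support matrix; with the `onnx` package installed you would obtain the op types via `[n.op_type for n in onnx.load("model.onnx").graph.node]`):

```python
# Illustrative only: a stand-in for the parser's supported-op list, so that
# failures like Pad or group norm surface before a full conversion attempt.
SUPPORTED_OPS = {"Conv", "Relu", "Sigmoid", "Concat", "Add", "Resize"}

def unsupported_ops(node_op_types):
    """Return the sorted set of op types the parser would reject."""
    return sorted(set(node_op_types) - SUPPORTED_OPS)

# Node op types as they might appear when iterating an exported ONNX graph
graph_ops = ["Conv", "GroupNormalization", "Relu", "Pad", "Concat"]
print(unsupported_ops(graph_ops))  # ['GroupNormalization', 'Pad']
```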
VitorGuizilini-TRI commented 4 years ago

We use TensorRT 7.0, and had to make some minor modifications to the model structure in order to successfully convert it (it's the same model, but written in a different way). We will soon release our TensorRT conversion script, including converted onnx models.

krishna-esrlabs commented 4 years ago

TensorRT 7.1.3 provides a script to convert PackNet to a TRT-compatible ONNX model; however, the ONNX editing tool the script depends on is missing from their package. Hopefully it is updated soon, since TRT 7.1.3 is GA at the moment. That's not a huge problem though: the script still helps by showing how, and which, nodes to edit.
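The node-editing idea behind that script can be illustrated with a toy stand-in (plain Python lists, not real ONNX objects): each unsupported op is rewritten into a decomposition the parser accepts. The mapping below is illustrative; the actual NVIDIA sample performs this surgery on real ONNX nodes, e.g. re-expressing group norm through `InstanceNormalization` plus a scale and shift.

```python
# Toy rewrite table (illustrative, not the sample's exact decomposition):
# map each unsupported op to a sequence of parser-friendly ops.
REWRITES = {
    "GroupNormalization": ["Reshape", "InstanceNormalization", "Reshape", "Mul", "Add"],
}

def edit_graph(nodes):
    """Expand every unsupported node into its parser-friendly decomposition."""
    edited = []
    for op in nodes:
        edited.extend(REWRITES.get(op, [op]))  # pass supported ops through
    return edited

print(edit_graph(["Conv", "GroupNormalization", "Relu"]))
# ['Conv', 'Reshape', 'InstanceNormalization', 'Reshape', 'Mul', 'Add', 'Relu']
```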

https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#onnx_packnet

TengFeiHan0 commented 4 years ago

@krishna-esrlabs Thank you for letting me know. I'm very glad to hear that this work has been integrated into the official TensorRT samples. So I only need to build a Docker environment to run this sample directly, am I right? As far as I know, https://github.com/NVIDIA/TensorRT is at version 7.0; is it compatible with the new TRT 7.1 GA?

Kirstihly commented 3 years ago

> @krishna-esrlabs Thank you for letting me know. I'm very glad to hear that this work has been integrated into the official TensorRT samples. So I only need to build a Docker environment to run this sample directly, am I right? As far as I know, https://github.com/NVIDIA/TensorRT is at version 7.0; is it compatible with the new TRT 7.1 GA?

Following up on the TensorRT Docker version: I was able to reproduce the official sample with the pre-built image nvcr.io/nvidia/tensorrt:20.11-py3. This requires Python 3.6 and CUDA 11.0.
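For anyone following along, launching that container might look like the sketch below. The image tag comes from the comment above; the in-container sample path and script names are guesses based on typical TensorRT container layouts, so check the sample's own README.

```shell
# Pull and start the NGC TensorRT container mentioned above
docker pull nvcr.io/nvidia/tensorrt:20.11-py3
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:20.11-py3

# Inside the container, the ONNX PackNet sample is expected to live under the
# bundled samples directory (path is an assumption), e.g.:
# cd /workspace/tensorrt/samples/python/onnx_packnet
# pip install -r requirements.txt
# python convert_to_onnx.py --output model.onnx
```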