Patrick-PhoenixAI opened 4 years ago
Hi, this repo is very helpful for converting the model quickly: https://github.com/NVIDIA-AI-IOT/torch2trt
Hi @PingoLH
I am trying to convert the HarDNet PyTorch segmentation model into a TensorRT engine to speed up inference; I want to run the model on a Jetson Xavier device.
Could you please give some advice or guidance on the conversion process?
I can give you some advice based on my own experience. I have converted the HarDNet model to TensorRT, and the inference time, including image preprocessing, is about 28 ms (640x640). Here are the key steps:
1. Replace F.interpolate with a custom interpolation method (you just need to register a new class in Python), then export the model to an ONNX file (I use PyTorch 1.2.0, ONNX opset_version 9).
2. Implement an interpolation plugin using the TensorRT API, and add the plugin library to the tool https://github.com/onnx/onnx-tensorrt.git. After compiling and linking the plugin, you can convert the ONNX file to a TensorRT engine file.
3. Use OpenCV to read images, and deserialize the TensorRT engine file to run inference (in C++).
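For anyone implementing step 2: the plugin has to reproduce exactly what the PyTorch op computed. As a minimal sketch (not the author's actual plugin code), here is the index mapping that `F.interpolate(x, scale_factor=s, mode='nearest')` applies per channel, written in plain Python with a hypothetical helper name `nearest_upsample` and assuming an integer scale factor:

```python
def nearest_upsample(img, scale):
    """Nearest-neighbour upsampling of a 2-D grid by an integer factor.

    This mirrors the per-channel mapping of PyTorch's
    F.interpolate(..., mode='nearest') with an integer scale_factor:
        out[y][x] = in[y // scale][x // scale]
    A TensorRT plugin standing in for that op must produce the same result.
    """
    h, w = len(img), len(img[0])
    return [[img[y // scale][x // scale] for x in range(w * scale)]
            for y in range(h * scale)]

# A 2x2 grid upsampled by 2 becomes a 4x4 grid where each
# input pixel is replicated into a 2x2 block.
out = nearest_upsample([[1, 2], [3, 4]], 2)
```

Verifying the plugin output element-wise against this reference on a few small tensors is a quick sanity check before wiring it into the full engine.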
@JosephChenHub Could you please tell me how to replace F.interpolate with a custom interpolation method? Thank you for your help!