PRBonn / lidar-bonnetal

Semantic and Instance Segmentation of LiDAR point clouds for autonomous driving
http://semantic-kitti.org
MIT License

The inference time on Jetson AGX! #18

Closed · SongyiGao closed this 4 years ago

SongyiGao commented 4 years ago

Thank you very much for your work!

I see that the reported runtime of RangeNet53++ (64 × 2048 px) is 188 ms on a Jetson AGX, but on my device it is 450 ms. I would like to know your lab environment settings, and whether there are any additional operations involved.
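
For reference, a minimal sketch of how the forward pass can be timed (a stand-in layer replaces the real model, and the 5-channel 64 × 2048 input layout is an assumption). Without warm-up and `torch.cuda.synchronize()` the numbers are unreliable, and the AGX power mode (`nvpmodel` / `jetson_clocks`) also changes the result:

```python
import time
import torch

# Stand-in network so the snippet runs on its own; substitute the real RangeNet53++ model.
model = torch.nn.Conv2d(5, 32, 3, padding=1).cuda().eval()
x = torch.randn(1, 5, 64, 2048, device="cuda")  # assumed 5-channel range image

with torch.no_grad():
    for _ in range(10):          # warm-up: first calls pay for CUDA init and cuDNN autotuning
        model(x)
    torch.cuda.synchronize()     # drain queued kernels before starting the clock
    start = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()     # wait for the last kernel before stopping the clock
    print(f"{(time.time() - start) / 100 * 1000:.1f} ms per forward pass")
```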

tano297 commented 4 years ago

Hi,

I have not released the full TensorRT deployment pipeline yet; that further optimization roughly doubles the FPS. However, @Chen-Xieyuanli released a good approximation of it for his work on semantic mapping. Can you try this and report back?

When I release the full inference pipeline later this year, I will also include INT8 inference quantization, so it should get even faster than 5 FPS, and closer to 10.
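
To sketch where that extra speed comes from (this is not the released pipeline; the file names are placeholders, and INT8 additionally requires a calibration dataset wired in via `config.int8_calibrator`), building a reduced-precision engine from an ONNX file with the TensorRT 7-era Python API looks roughly like this:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# The ONNX parser expects an explicit-batch network definition.
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:          # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30          # 1 GiB of build scratch space (TRT 7 API)
config.set_flag(trt.BuilderFlag.FP16)        # FP16 is usually the biggest single win on Jetson
# config.set_flag(trt.BuilderFlag.INT8)      # INT8 also needs config.int8_calibrator set

engine = builder.build_engine(network, config)   # newer TRT versions use build_serialized_network
with open("model.trt", "wb") as f:
    f.write(engine.serialize())
```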

SongyiGao commented 4 years ago

Thank you for your reply. I tried to convert the PyTorch model to TensorRT using the torch2trt library. The conversion was successful, but the output is the same for different inputs. I would like to know whether your conversion process converts the PyTorch model to an ONNX model and then to TensorRT.
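
What I tried looks roughly like the following (a minimal sketch with a stand-in layer instead of the real network). As far as I understand, torch2trt can skip layers it does not support with only a warning, which might explain a constant output:

```python
import torch
from torch2trt import torch2trt

# Stand-in network; substitute the real model here.
model = torch.nn.Conv2d(5, 32, 3, padding=1).cuda().eval()
x = torch.randn(1, 5, 64, 2048, device="cuda")

model_trt = torch2trt(model, [x])   # traces the model with this example input

# Sanity check: different inputs should give different outputs,
# and the TensorRT result should match PyTorch closely.
a, b = torch.randn_like(x), torch.randn_like(x)
print("outputs differ:    ", not torch.allclose(model_trt(a), model_trt(b)))
print("max error vs torch:", (model(a) - model_trt(a)).abs().max().item())
```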

tano297 commented 4 years ago

Yes, our process right now is PyTorch -> ONNX -> TensorRT. In general, representing networks in ONNX is a good idea, because if you change frameworks you can reuse your inference pipeline.
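
The first hop of that chain looks roughly like this (a minimal sketch; the stand-in layer and file name are illustrative):

```python
import torch
import onnx

# Stand-in network; substitute the trained model.
model = torch.nn.Conv2d(5, 32, 3, padding=1).eval()
dummy = torch.randn(1, 5, 64, 2048)   # example input that fixes the traced shapes

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,                 # pick an opset the target TensorRT version supports
)

# Optional structural check before handing the file to TensorRT.
onnx.checker.check_model(onnx.load("model.onnx"))
```

The resulting file can then go through the ONNX parser as above, or through `trtexec` (e.g. `trtexec --onnx=model.onnx --fp16 --saveEngine=model.trt`) for a quick engine build and benchmark.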

tano297 commented 4 years ago

Have a look at the code I referenced for an example of how to achieve this.

TT22TY commented 4 years ago

Hi @tano297,

When will the full TensorRT deployment pipeline for semantic segmentation be released?

Thanks!