NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter
MIT License

Serving with TRTIS #212

Closed · dhkim0225 closed this issue 4 years ago

dhkim0225 commented 4 years ago

Hello. I want to serve a model converted with torch2trt using the TensorRT Inference Server (TRTIS).

However, I keep getting errors when the server deserializes the model. I created a repository to reproduce the issue: https://github.com/dhkim0225/reproduce-torch2trt-issue-212

I'm not sure, but I think it's because of this line: https://github.com/NVIDIA/tensorrt-inference-server/blob/e027e9692fb74fe2af72b21f33b57373569e68d2/src/backends/tensorrt/plan_backend_factory.cc#L56

torch2trt registers its plugins under the namespace 'torch2trt', but TRTIS only calls initLibNvInferPlugins with the empty namespace, so the plugin creators are never found when the plan is deserialized.
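
For illustration, here is a minimal C++ sketch of the mismatch. The plugin name "interpolate" and version "1" are assumptions standing in for whatever torch2trt actually registers; the lookups use TensorRT's standard plugin-registry API:

```cpp
#include <NvInfer.h>
#include <iostream>

int main() {
    // Assumes libtorch2trt.so is already loaded, so its plugin creators
    // sit in the global registry under the "torch2trt" namespace.
    auto* registry = nvinfer1::getPluginRegistry();

    // Effectively what TRTIS does: it only initializes and searches the
    // empty ("") plugin namespace.
    auto* creator = registry->getPluginCreator("interpolate", "1", "");
    std::cout << "empty namespace:     "
              << (creator ? "found" : "not found") << "\n";  // not found

    // The same lookup succeeds once the matching namespace is supplied.
    creator = registry->getPluginCreator("interpolate", "1", "torch2trt");
    std::cout << "torch2trt namespace: "
              << (creator ? "found" : "not found") << "\n";
    return 0;
}
```

Since the server never passes "torch2trt" to the lookup, deserializing a plan that contains these plugins fails.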

How can I use TRTIS with torch2trt?

dhkim0225 commented 4 years ago

I forked torch2trt and removed the namespace.

Linking the PyTorch c10 library also solved my problem.

- forked torch2trt (namespace removed): https://github.com/dhkim0225/torch2trt.git
- updated reproduction repository (Dockerfile links the PyTorch c10 lib): https://github.com/dhkim0225/reproduce-torch2trt-issue-212
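
If forking is undesirable, another way to get the same effect is a small shim library preloaded into the server after libtorch2trt.so. This is only a sketch under assumptions, not something verified in this thread: it re-registers every creator found in the "torch2trt" namespace under the empty namespace, so the server's empty-namespace lookup can resolve them. Note that registerCreator may reset a creator's own namespace, so the behavior should be checked against your TensorRT version.

```cpp
#include <NvInfer.h>
#include <cstring>
#include <vector>

namespace {

// Runs when this shared library is loaded (e.g. via LD_PRELOAD).
// Assumes libtorch2trt.so was loaded first, so its plugin creators are
// already in the global registry under the "torch2trt" namespace.
struct ReRegisterTorch2trtPlugins {
    ReRegisterTorch2trtPlugins() {
        auto* registry = nvinfer1::getPluginRegistry();
        int numCreators = 0;
        nvinfer1::IPluginCreator* const* creators =
            registry->getPluginCreatorList(&numCreators);

        // Collect first: registering below may invalidate the list.
        std::vector<nvinfer1::IPluginCreator*> toExpose;
        for (int i = 0; i < numCreators; ++i) {
            if (std::strcmp(creators[i]->getPluginNamespace(),
                            "torch2trt") == 0) {
                toExpose.push_back(creators[i]);
            }
        }

        // Make each creator visible to empty-namespace lookups, which is
        // all that code calling initLibNvInferPlugins(logger, "") searches.
        for (auto* creator : toExpose) {
            registry->registerCreator(*creator, "");
        }
    }
};

ReRegisterTorch2trtPlugins reRegister;  // static initializer

}  // namespace
```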