CaoWGG / TensorRT-CenterNet

tensorrt5 , centernet , centerface, deform conv, int8

converted tensorrt engine causes Segmentation fault #54

Open WIll-Xu35 opened 4 years ago

WIll-Xu35 commented 4 years ago

Hi all,

I've trained my own ctdet_dla_34 model with 10 object classes. Training was successful and torch inference worked fine.

I followed the instructions in this repo to generate the ONNX model. But the generated ONNX model cannot be used and raises the following error:

```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9
```
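For what it's worth, that error can be expected: DCNv2 is not part of the standard ONNX opset, so stock onnxruntime rejects the graph even when the op will later be handled by a TensorRT plugin. A minimal sketch of listing the custom ops in an exported model up front (the `STANDARD_OPS` subset, the helper name, and the model filename are all illustrative assumptions, not from this repo):

```python
# Illustrative subset of standard ONNX ops; the real opset is much larger.
STANDARD_OPS = {"Conv", "Relu", "MaxPool", "ConvTranspose", "Add", "Sigmoid"}

def custom_ops(op_types, standard=STANDARD_OPS):
    """Return the op types that are not in the given standard-op subset,
    i.e. the ones that would need a plugin or custom registration."""
    return sorted(set(op_types) - standard)

# With the real model one would collect op_types from the ONNX graph, e.g.:
#   import onnx
#   m = onnx.load("ctdet_dla34.onnx")          # hypothetical filename
#   ops = {node.op_type for node in m.graph.node}
print(custom_ops(["Conv", "DCNv2", "Relu", "DCNv2"]))  # ['DCNv2']
```

If the only custom op reported is `DCNv2`, the onnxruntime failure is harmless for the TensorRT path, since onnx-tensorrt supplies the DCNv2 plugin.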

I tried to ignore this error and went on to convert it to a TensorRT engine, and the engine build finished without errors. But when I try to load the engine with the following code:

```python
with open('test.engine', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
```

it raises:

```
Segmentation fault (core dumped)
```

and this is the only output from the execution.
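One common cause of a segfault at exactly this point: an engine that contains a custom layer such as DCNv2 can only be deserialized after the plugin implementing that layer has been loaded into the process. A hedged sketch of guarding the deserialization (the plugin library path is a hypothetical placeholder, not a path from this repo):

```python
import ctypes
import os

# Hypothetical path to the shared library that registers the DCNv2 plugin.
PLUGIN_LIB = "onnx-tensorrt/build/libdcn_v2_plugin.so"

def load_plugins(paths):
    """Load each plugin shared library that exists on disk.
    Returns the list of libraries actually loaded."""
    loaded = []
    for path in paths:
        if os.path.exists(path):
            ctypes.CDLL(path)  # loading registers the plugin creators
            loaded.append(path)
    return loaded

loaded = load_plugins([PLUGIN_LIB])
if loaded:
    # Only attempt deserialization once the plugin is in the process.
    import tensorrt as trt
    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    with open("test.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
```

If the library is missing or never loaded, deserialization of a plugin-bearing engine can crash rather than fail gracefully, which matches the symptom here.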

My environment is PyTorch 1.0, Ubuntu 16.04, TensorRT 5.0.2, onnx-tensorrt v5.0, and CUDA 9.0, all inside a docker container.

Any idea what might be wrong and how to solve this problem?

Much appreciated.

Jumponthemoon commented 4 years ago

@WIll-Xu35 Hi, did you solve it?

WIll-Xu35 commented 4 years ago

@qianchenghao Nope, I used dlav0 instead

Jumponthemoon commented 4 years ago

> @qianchenghao Nope, I used dlav0 instead

OK, I'll give it a try. Thanks!