Ghustwb / MobileNet-SSD-TensorRT

Accelerate mobileNet-ssd with tensorRT

Segmentation fault: incompatible TensorRT version #40

Open masteryun opened 5 years ago

masteryun commented 5 years ago

When I run the command ./mobileNet I get the following errors:

attempting to open cache file ../../model/MobileNetSSD_deploy.caffemodel.1.tensorcache
loading network profile from cache...
createInference
The engine plan file is incompatible with this version of TensorRT, expecting 5.0.6.3 got -2100382470.0.2.1852793695, please rebuild.
createInference_end
Bindings after deserializing:
Segmentation fault (core dumped)

But the requirements list says TensorRT 4 is OK.

It looks like I should install TensorRT 5 according to the error, but when I run dpkg -l | grep TensorRT it shows:

...~/MobileNet-SSD-TensorRT$ dpkg -l | grep TensorRT
ii graphsurgeon-tf 5.0.6-1+cuda10.0 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.0.6-1+cuda10.0 arm64 TensorRT development libraries and headers
ii libnvinfer-samples 5.0.6-1+cuda10.0 all TensorRT samples and documentation
ii libnvinfer5 5.0.6-1+cuda10.0 arm64 TensorRT runtime libraries
ii python-libnvinfer 5.0.6-1+cuda10.0 arm64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.0.6-1+cuda10.0 arm64 Python development package for TensorRT
ii python3-libnvinfer 5.0.6-1+cuda10.0 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.0.6-1+cuda10.0 arm64 Python 3 development package for TensorRT
ii tensorrt 5.0.6.3-1+cuda10.0 arm64 Meta package of TensorRT
ii uff-converter-tf 5.0.6-1+cuda10.0 arm64 UFF converter for TensorRT package

My TensorRT is 5! Can someone help me out with this? Thanks.
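For reference, the "expecting 5.0.6.3 got -2100382470.0.2.1852793695" line means the cached plan file was serialized by a different TensorRT build (or is corrupt), so the runtime rejects it. Below is a minimal sketch of checking which TensorRT a binary is compiled and linked against, assuming TensorRT 5.x-era headers (the NV_TENSORRT_* macros and getInferLibVersion()); the file name and build line are illustrative, not from this repo:

```cpp
// version_check.cpp -- hedged sketch: print the TensorRT version the headers
// claim versus the libnvinfer the binary actually loads at run time.
// Illustrative build line: g++ version_check.cpp -lnvinfer -o version_check
#include <cstdio>
#include <NvInfer.h>  // NV_TENSORRT_MAJOR/MINOR/PATCH and getInferLibVersion()

int main() {
    // Version encoded the same way getInferLibVersion() reports it
    // (major * 1000 + minor * 100 + patch).
    int compiled = NV_TENSORRT_MAJOR * 1000 + NV_TENSORRT_MINOR * 100 + NV_TENSORRT_PATCH;
    int linked = getInferLibVersion();

    std::printf("compiled against TensorRT %d.%d.%d, runtime library reports %d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, linked);

    // A .tensorcache engine plan is only valid for the exact build that wrote it,
    // so on any mismatch (or a plan copied from another machine) delete the cache
    // and let the program rebuild the engine from the caffemodel.
    return (compiled == linked) ? 0 : 1;
}
```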

BRUCE11111 commented 5 years ago

I got the same errors as you. Did you fix it? How? Thank you.

masteryun commented 5 years ago

Nope. But I think it would probably work after installing OpenCV, cuDNN, and TensorRT via JetPack 3.3, which matches the environment the author says is required.

BRUCE11111 commented 5 years ago

Thanks for the reply. I found the cause of the problem. This code must run with TensorRT 4: it loads a serialized engine (TensorRT's plan format), and serialized engines are not portable across platforms or TensorRT versions. With TensorRT 4 it works fine.
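In other words, the segfault happens because the program deserializes the stale .tensorcache without checking the result. Here is a hedged sketch of a defensive load path, assuming the TensorRT 4/5-era C++ API (IRuntime::deserializeCudaEngine taking an optional plugin factory); buildEngineFromCaffe and the cache path are hypothetical stand-ins, not this repo's code:

```cpp
// load_or_rebuild.cpp -- hedged sketch (TensorRT 4/5-era C++ API): load the
// cached engine defensively and rebuild it when deserialization fails,
// instead of segfaulting on an incompatible .tensorcache.
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
#include <NvInfer.h>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};
static Logger gLogger;

// Hypothetical stand-in for the repo's caffemodel parsing/building code
// (ICaffeParser + IBuilder), which would also re-serialize the new engine.
static nvinfer1::ICudaEngine* buildEngineFromCaffe(nvinfer1::IRuntime&) { return nullptr; }

nvinfer1::ICudaEngine* loadOrRebuild(nvinfer1::IRuntime& runtime, const std::string& cachePath) {
    std::ifstream cache(cachePath, std::ios::binary);
    if (cache) {
        std::vector<char> blob((std::istreambuf_iterator<char>(cache)),
                               std::istreambuf_iterator<char>());
        // deserializeCudaEngine returns nullptr when the plan was serialized by a
        // different TensorRT build; a custom-layer plugin factory would go in the
        // third argument.
        nvinfer1::ICudaEngine* engine =
            runtime.deserializeCudaEngine(blob.data(), blob.size(), nullptr);
        if (engine) return engine;
        std::cout << "stale engine cache, rebuilding: " << cachePath << std::endl;
    }
    return buildEngineFromCaffe(runtime);
}
```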

masteryun commented 5 years ago

Cheers!

ljdongysu commented 4 years ago

When I build it with TensorRT 6, it turns out 'cudaSoftmax(int, int, float, float)' is not defined. Deleting the cudaSoftmax function lets it build successfully, but it then fails at runtime with Assertion `C2 == inputDims[param.inputOrder[1]].d[0]' failed. How can I deal with this? Is it caused by the TensorRT version?
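The missing cudaSoftmax symbol and the failed inputOrder assertion are both symptoms of building TensorRT-4-era plugin code against a newer release whose plugin API differs. One way to surface that at compile time is sketched below, under the assumption that only TensorRT 4 is supported; the macro names come from NvInfer.h, but the guard itself is not part of this repo:

```cpp
// trt_version_guard.h -- hedged sketch: fail the build loudly when the plugin
// code is compiled against a TensorRT release it was not written for, rather
// than discovering missing symbols (cudaSoftmax) or shape assertions at run time.
#pragma once
#include <NvInfer.h>  // defines NV_TENSORRT_MAJOR / NV_TENSORRT_MINOR

#if NV_TENSORRT_MAJOR != 4
#error "This code targets TensorRT 4; the IPlugin-based plugin API it relies on changed in later releases."
#endif
```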

masteryun commented 4 years ago

TensorRT 4 is necessary for this project. Hope this helps!

huiofficial commented 3 years ago

Looks like you are running TensorRT 5.x; try using 4.0. Hope this helps!