hukkelas / DSFD-Pytorch-Inference

A High-Performance Pytorch Implementation of face detection models, including RetinaFace and DSFD
Apache License 2.0

TRT inference error #15

Closed · rnekk2 closed this issue 4 years ago

rnekk2 commented 4 years ago

[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "test_video.py", line 33, in &lt;module&gt;
    detector = TensorRTRetinaFace(input_imshape, inference_imshape)
  File "/data/DSFD-Pytorch-Inference/face_detection/retinaface/tensorrt_wrap.py", line 38, in __init__
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

I am seeing this error when running inference with TensorRT. How can I fix it?

TensorRT version: 7.1.3, Torch version: 1.4.0

hukkelas commented 4 years ago

Hi, the error means you were not able to build (or deserialize) a TensorRT engine.

Sadly, I do not have the capacity to help you debug this TensorRT issue, as it is not something I'm very experienced with. If you are not familiar with TensorRT, I recommend using the default PyTorch version.
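
For reference, a minimal sketch (not the repository's code) of checking that engine deserialization actually succeeded before creating an execution context; `engine_path` is a hypothetical placeholder for the serialized engine file:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
engine_path = "retinaface.engine"  # hypothetical path to a serialized engine

# deserialize_cuda_engine returns None when the buffer is empty or invalid,
# which is what produces the AttributeError in the traceback above.
with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    raise RuntimeError("Failed to deserialize the TensorRT engine; rebuild it first.")

context = engine.create_execution_context()
```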

JohannesTK commented 4 years ago

@hukkelas which TensorRT version are you using?

rnekk2 commented 4 years ago

The issue was fixed by increasing the builder workspace size. I am using TRT 7.
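
For anyone hitting the same error, here is a minimal sketch of building an engine from an ONNX file with a larger workspace using the TensorRT 7 Python API. The file name and workspace size are placeholder assumptions, and this is not the repository's own build code:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, workspace_gb=2):
    """Build a TensorRT engine from an ONNX file; returns None if the build fails."""
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        config = builder.create_builder_config()
        # The key change: give the builder more scratch memory so layer tactics
        # do not fail and the build does not silently return None.
        config.max_workspace_size = workspace_gb << 30  # bytes
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_engine(network, config)

# Usage (hypothetical file name):
# engine = build_engine("retinaface.onnx", workspace_gb=4)
```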