Closed Dave0995 closed 2 years ago
I think the context should be created only once, not every time you call inference.
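A minimal sketch of that pattern: create the context once when the server starts, then push/pop it around each inference call instead of recreating it. The PyCUDA context is replaced here by a counting stub (`FakeContext`) so the sketch runs anywhere; in the real server that stub would be `cuda.Device(0).make_context()`, and `infer` would run the TensorRT engine. All class and method names below are illustrative, not from the repo.

```python
import threading


class FakeContext:
    """Stub standing in for the object returned by cuda.Device(0).make_context()."""
    created = 0  # counts how many contexts have been made

    def __init__(self):
        FakeContext.created += 1

    def push(self):
        # Real pycuda: makes this context current on the calling thread.
        pass

    def pop(self):
        # Real pycuda: removes the context from the calling thread.
        pass


class TrtInfer:
    """Create the CUDA context once and reuse it for every request."""

    def __init__(self):
        # Real code: self.cfx = cuda.Device(0).make_context()
        self.cfx = FakeContext()
        self.lock = threading.Lock()

    def infer(self, batch):
        # Each gRPC handler thread pushes/pops the one shared context;
        # it never calls make_context() again per request.
        with self.lock:
            self.cfx.push()
            try:
                # Placeholder for the TensorRT execution step.
                result = [x * 2 for x in batch]
            finally:
                self.cfx.pop()
        return result


server = TrtInfer()
outs = [server.infer([1, 2]) for _ in range(3)]
print(FakeContext.created, outs[0])  # → 1 [2, 4]
```

The lock is only needed because gRPC serves requests from a thread pool; the key point is that `FakeContext.created` stays at 1 no matter how many inference calls are made.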
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Env
About this repo
Your problem
I'm using gRPC to deploy YOLOv5 with TensorRT on a Jetson AGX Xavier. When the client calls the server for inference, the communication stops and the socket closes with a "segmentation fault (core dumped)" error. Debugging the code, I found that it all happens when the execution context is created on this line:
self.cfx = cuda.Device(0).make_context()
The weird thing is that when I use the inference engine in isolation, the segmentation fault never happens.
Could you tell me whether this error is new or has been reported before?
Thanks for your help!!!