jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

Error when running on separate thread #593

Closed Mikyas1 closed 1 year ago

Mikyas1 commented 1 year ago

Hello,

I'm running into the following issue when trying to perform detection on a separate thread.

The first case is initializing TrtYOLO on the main thread and running detection on a separate thread:

from threading import Thread

import pycuda.autoinit
from utils.yolo_with_plugins import TrtYOLO

trt_yolo = TrtYOLO(args.model, args.category_num, args.letter_box)

...
Thread(target=trt_yolo.detect, args=(img, conf_th), daemon=False).start()

The above code gives the error: [convolutionRunner.cpp::execute::391] Error Code 1: Cask (Cask convolution execution).

The second case is initializing and running detection on the same single separate thread, like the following:

from threading import Thread

def run():
    import pycuda.autoinit
    from utils.yolo_with_plugins import TrtYOLO

    trt_yolo = TrtYOLO(args.model, args.category_num, args.letter_box)

    ...
    trt_yolo.detect(img, conf_th)

Thread(target=run, daemon=False).start()

The above code produces a "fail to allocate CUDA resources" error.

Is there a way to run detection on another thread? I need the main thread for other things.

Thanks in advance.

jkjung-avt commented 1 year ago

Please refer to https://github.com/jkjung-avt/tensorrt_demos/issues/68 and https://github.com/jkjung-avt/tensorrt_demos/issues/213#issuecomment-691826122.

Mikyas1 commented 1 year ago

Yep, this works perfectly:

TrtYOLO(args.model, (h, w), args.category_num, cuda_ctx=pycuda.autoinit.context)
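For completeness, a minimal sketch of this working pattern, assuming the constructor signature used in the comment above (the (h, w) input shape argument may not exist in newer versions of the repo); args, h, w and the input image are placeholders carried over from the earlier snippets:

from threading import Thread

import cv2
import pycuda.autoinit  # importing this creates the global CUDA context
from utils.yolo_with_plugins import TrtYOLO

def run():
    # Pass the context created by pycuda.autoinit so the engine can
    # push/pop that same CUDA context from the worker thread.
    trt_yolo = TrtYOLO(args.model, (h, w), args.category_num,
                       cuda_ctx=pycuda.autoinit.context)
    img = cv2.imread('test.jpg')  # placeholder input image
    boxes, confs, clss = trt_yolo.detect(img, conf_th=0.3)

Thread(target=run, daemon=False).start()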