zombie0117 / yolov3-tiny-onnx-TensorRT

convert your yolov3-tiny model to trt model

Why is _process_yolo_output in the TensorRT sample code so slow? It takes 0.3 seconds to execute once #14

Open gzchenjiajun opened 4 years ago

gzchenjiajun commented 4 years ago

I use the TensorRT code to run inference with yolov3_tiny. A single inference takes about 0.03 s, so pure inference is close to 25 fps, but why does _process_yolo_output in data_processing.py take 0.3 s per call?

How can I speed this up? My input image size is only 416.
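A likely reason for the 0.3 s is that the sample's _process_yolo_output applies sigmoid/exp to every grid cell before any thresholding, so most of the time goes into decoding boxes that are then thrown away. Below is a minimal sketch of one common speed-up: threshold on objectness first and only decode the surviving cells. The function name decode_yolo_tiny_layer, the 3-anchor/80-class layout, and the anchor handling are assumptions for illustration, not the repository's actual code.

```python
# Hedged sketch: vectorized decoding of one yolov3-tiny output layer with an
# early objectness filter. Assumes layer output shape
# (num_anchors * (5 + NUM_CLASSES), grid_h, grid_w) and anchors in input pixels.
import numpy as np

NUM_CLASSES = 80

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_tiny_layer(output, anchors, input_size=416, obj_threshold=0.5):
    num_anchors = len(anchors)
    grid_h, grid_w = output.shape[-2:]
    # (num_anchors, 5 + NUM_CLASSES, H, W) -> (H, W, num_anchors, 5 + NUM_CLASSES)
    out = output.reshape(num_anchors, 5 + NUM_CLASSES, grid_h, grid_w)
    out = out.transpose(2, 3, 0, 1)

    # cheap objectness test first: only decode cells that can pass the threshold
    obj = _sigmoid(out[..., 4])
    keep = obj >= obj_threshold
    if not np.any(keep):
        return np.empty((0, 4)), np.empty((0,)), np.empty((0,), dtype=int)

    rows, cols, anchor_idx = np.nonzero(keep)
    kept = out[rows, cols, anchor_idx]                       # (N, 5 + NUM_CLASSES)
    kept_anchors = np.asarray(anchors, dtype=np.float32)[anchor_idx]

    # box decoding, vectorized over the kept cells only
    stride = input_size / grid_w
    x = (_sigmoid(kept[:, 0]) + cols) * stride
    y = (_sigmoid(kept[:, 1]) + rows) * stride
    w = np.exp(kept[:, 2]) * kept_anchors[:, 0]
    h = np.exp(kept[:, 3]) * kept_anchors[:, 1]

    class_probs = _sigmoid(kept[:, 5:])
    classes = np.argmax(class_probs, axis=1)
    scores = obj[rows, cols, anchor_idx] * class_probs[np.arange(len(classes)), classes]

    boxes = np.stack([x - w / 2, y - h / 2, w, h], axis=1)   # (N, 4): x, y, w, h
    return boxes, scores, classes
```

This would be called once per output layer with that layer's masked anchors, e.g. (81, 82), (135, 169), (344, 319) for the 13x13 layer of yolov3-tiny at 416, followed by the usual NMS step.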

One more question: the GPU usage on my Jetson Nano does not seem stable. When I check with tegrastats I see "EMC_FREQ 0% GR3D_FREQ 99%", but GR3D_FREQ jumps between 99%, 13%, and 0%. Is this normal? Also, when I run my yolov3_tiny, the memory fills up.

How can I fix this? Thank you.
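On the fluctuating GR3D_FREQ: if the GPU repeatedly drops to 0% between inferences, that is consistent with the pipeline waiting on CPU-side post-processing rather than the GPU being the bottleneck. A small, hypothetical helper to log GR3D_FREQ while the detector runs (assuming tegrastats is on the PATH and prints one status line per second; it may need sudo on some JetPack versions):

```python
# Hedged sketch: sample tegrastats output and print the GR3D_FREQ (GPU load) value.
import re
import subprocess

def sample_gr3d(num_samples=10):
    proc = subprocess.Popen(["tegrastats"], stdout=subprocess.PIPE, text=True)
    try:
        for _ in range(num_samples):
            line = proc.stdout.readline()
            m = re.search(r"GR3D_FREQ (\d+)%", line)
            if m:
                print("GR3D_FREQ:", m.group(1) + "%")
    finally:
        proc.terminate()

if __name__ == "__main__":
    sample_gr3d()
```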

Why is the _process_yolo_output in the TensorRT sample code so slow? It takes 0.3 seconds to execute once - NVIDIA Developer Forums: https://devtalk.nvidia.com/default/topic/1072376/jetson-nano/why-is-the-_process_yolo_output-in-the-tensorrrt-sample-code-so-slow-it-takes-0-3-seconds-to-execute-once/?offset=2#5432984

I also went to NVIDIA's community to ask, but I haven't gotten a reply yet

AadeIT commented 4 years ago

Have you solved this problem yet? I'm running into it too. Please help.