lindsayshuo / yolov8_p2_tensorrtx


yolov8-pose Infer slowly on Jetson #1

Open wcycqjy opened 1 month ago

wcycqjy commented 1 month ago

Thank you for your code. I've run into a problem: when I infer with yolov8n-pose and yolov8s-pose on Jetson, one image takes about 1200 ms, even slower than the PyTorch model in Python, which only takes around 40 ms. This is very strange.

lindsayshuo commented 1 month ago

Before benchmarking, you need to run inference on a few dummy images first. The engine needs to be warmed up before the speed increases.
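The warmup-then-measure pattern can be sketched as below. This is an illustrative helper, not code from the repo; `infer` stands for whatever callable wraps your TensorRT execution context:

```python
import time

def benchmark(infer, inputs, warmup=10, runs=50):
    """Time an inference callable, discarding warmup iterations.

    The first few calls pay one-off costs (CUDA context setup,
    clock ramp-up on Jetson), so they are excluded from timing.
    """
    for _ in range(warmup):
        infer(inputs)  # warmup: results and timings discarded
    start = time.perf_counter()
    for _ in range(runs):
        infer(inputs)
    return (time.perf_counter() - start) / runs  # mean seconds per run
```

With this, the reported latency reflects steady-state performance rather than the slow first calls.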

wcycqjy commented 1 month ago

Thank you. Now it's okay. Can you briefly explain why it needs warmup? I have run yolov5 with TensorRT on Jetson before but didn't run into this issue.

lindsayshuo commented 1 month ago

YOLOv5 in tensorrtx also needs to be warmed up; if you measure it, you will find that the first inference is slower than the later ones. The Jetson platform may use dynamic frequency scaling to manage power and thermal constraints. Running a few initial inferences helps the system settle on the optimal clock speeds for the CPU and GPU, improving performance for subsequent inferences.
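On a stock JetPack/L4T install, an alternative to relying on warmup alone is to pin the clocks before benchmarking, so frequency scaling does not skew the measurements (these are the standard NVIDIA-provided tools; run on the Jetson itself):

```shell
# Lock Jetson clocks so dynamic frequency scaling does not skew benchmarks.
sudo nvpmodel -m 0     # select the maximum-performance power mode
sudo jetson_clocks     # pin CPU/GPU/EMC clocks to their maximum values
```

A short warmup is still useful afterwards to absorb one-time CUDA initialization costs.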