Open SMDR1412 opened 2 months ago
Hello, I also found this problem. When doing object detection, YOLOv10n performs worse than YOLOv8n, especially in speed. I tested on a 4080 laptop GPU. Please help me with this, thank you.
Hello my friend. According to the author, inference with YOLOv10 under the PyTorch framework passes through additional cv2/cv3 layers that are not used in actual inference. Therefore, when comparing with other models, the exported ONNX model is used for the inference speed tests. After exporting YOLOv10s to ONNX, it is indeed much faster than other strong models such as YOLOv8s and YOLOv5s. The next step is to run inference tests with TensorRT.
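For reference, here is a minimal sketch of that workflow: export the weights to ONNX and time raw `onnxruntime` inference. It assumes the Ultralytics-style `YOLO` API, a 640x640 static input, and the weight/file names shown (`yolov10s.pt`, `yolov10s.onnx`); adjust these to your own model and hardware. This is not the exact script used above, just one way to run the comparison.

```python
# Minimal sketch: export to ONNX, then benchmark with onnxruntime.
# File names, image size, and execution providers are assumptions.
import time

import numpy as np
import onnxruntime as ort
from ultralytics import YOLO

# Export once; swap in other weights (e.g. yolov8s.pt) to compare models.
YOLO("yolov10s.pt").export(format="onnx", imgsz=640)

session = ort.InferenceSession(
    "yolov10s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Warm up so GPU/kernel initialization does not skew the timing.
for _ in range(20):
    session.run(None, {input_name: dummy})

n_runs = 200
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: dummy})
elapsed = time.perf_counter() - start
print(f"avg latency: {elapsed / n_runs * 1000:.2f} ms ({n_runs / elapsed:.1f} FPS)")
```

Running the same script against each exported model (v5, v8, v10) with identical input size and provider settings keeps the comparison fair; preprocessing and NMS are deliberately left out so only the network forward pass is measured.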
Hello, could you share the scripts or commands you used to compare the ONNX speed of v5, v8, and v10? Thank you.
I'm using YOLOv10s to train on a smoke detection dataset, and when measuring FPS I found it to be slower than YOLOv8s and YOLOv5s. I wonder if this is due to some mistake on my part. Could you please provide some guidance?