liminghu opened this issue 3 years ago
Ah... I think comparing the speed of these tiny models on a GPU is quite meaningless, as they are mostly deployed on NPUs or mobile devices. Since optimization and acceleration differ across hardware, the best practice is to implement several models and measure latency on your own device. For example, YOLOX is even faster than YOLOv5 in this case: Zhihu
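To make "measure latency on your own device" concrete, here is a minimal timing sketch in PyTorch; the `model` argument and the 416×416 input shape are placeholders rather than YOLOX-specific API. Warm up first, then synchronize around the timed loop so GPU kernels are actually counted.

```python
import time
import torch

def measure_latency(model, input_size=(1, 3, 416, 416), n_warmup=10, n_iters=100):
    """Return mean forward-pass latency in milliseconds."""
    device = next(model.parameters()).device
    x = torch.randn(*input_size, device=device)
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):        # warm-up: cudnn autotune, allocator caches
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()     # wait for warm-up kernels to finish
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()     # wait for all timed kernels
        elapsed = time.perf_counter() - start
    return elapsed / n_iters * 1000.0

# e.g. print(f"{measure_latency(my_model):.2f} ms / image")
```

The same loop run on a CPU, an NPU runtime, or a mobile device will give very different numbers, which is exactly the point above.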
I tested YOLOX-tiny and got better performance than reported in the paper:
on the validation data, YOLOX-tiny reaches AP@0.5:0.95: 0.3227 and AP@0.5: 0.493.
I also tested YOLOX-Nano at an inference resolution of 416×416.
On the validation data, I got mAP@0.5:0.95: 0.2387 and AP@0.5: 0.39.
The performance is slightly worse than in the paper.
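In case it helps anyone reproduce these numbers, here is a minimal evaluation sketch with pycocotools (the two file paths are placeholders for the COCO val annotations and the detector's JSON results); AP@0.5:0.95 and AP@0.5 are `stats[0]` and `stats[1]` of the summarized evaluation.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")      # ground truth
coco_dt = coco_gt.loadRes("yolox_tiny_predictions.json")  # detector output
ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()
print("AP@0.5:0.95 =", ev.stats[0], " AP@0.5 =", ev.stats[1])
```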
The inference speed can also depend heavily on your RAM's I/O speed.
> Ah... I think comparing the speed of these tiny models on a GPU is quite meaningless, as they are mostly deployed on NPUs or mobile devices. Since optimization and acceleration differ across hardware, the best practice is to implement several models and measure latency on your own device. For example, YOLOX is even faster than YOLOv5 in this case: Zhihu
YOLOX-tiny on ncnn runs at only ~10 fps on an 8th-gen i7 CPU, while YOLOv4-tiny can run at more than 30 fps.
According to your paper, YOLOX-Nano/tiny have fewer parameters than YOLOv4-tiny and better COCO AP (%).
According to https://github.com/AlexeyAB/darknet/issues/7928, YOLOv4-tiny (3l) reaches 38.6% AP at 182 fps (end-to-end inference).
What is the inference speed of YOLOX-tiny/Nano compared with YOLOv4-tiny (3l), and the corresponding mAP@0.5?
Thanks.