Open zeerandhawa opened 1 year ago
Both were tested on the same NVIDIA Tesla V100 GPU using this Docker image https://github.com/WongKinYiu/yolov7#installation and this command:
python test.py --data data/coco.yaml --img 1280 --batch 1 --conf 0.001 --iou 0.65 --device 0 --weights yolov7-w6.pt --name yolov7_w6_1280_val
Thank you for your response.
Just to confirm, the reported FPS is only the model throughput time, right? It does not include NMS time?
Yes, only model inference.
Can you tell me how to get the FPS? Run test.py to get the time per image, then calculate it?
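If FPS is derived from the per-image inference time as asked above, a minimal timing sketch could look like this. Note this is an illustration, not code from the repo: the `infer` callable stands in for the model forward pass (model inference only, no NMS, matching the answer above), and the warm-up count is an arbitrary assumption:

```python
import time

def measure_fps(infer, images, warmup=10):
    """Average model-only FPS over a list of preprocessed images.

    `infer` is a placeholder for the model forward pass (inference only,
    excluding NMS and other post-processing).
    """
    # Warm up: the first iterations often include one-off costs
    # (CUDA context init, cuDNN autotuning), which would skew the average.
    for img in images[:warmup]:
        infer(img)

    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start

    per_image = elapsed / len(images)  # seconds per image
    return 1.0 / per_image             # FPS = 1 / per-image latency

# Toy stand-in for a model forward pass, just to show the usage.
fake_infer = lambda img: sum(img)
fps = measure_fps(fake_infer, [[1, 2, 3]] * 100)
print(f"{fps:.1f} FPS")
```

For a real GPU model you would also need to synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the timer, since CUDA kernel launches are asynchronous.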
I am currently comparing inference speed of YOLOR-p6 and YOLOv7-w6 on GTX 1080 Ti.
Based on the reported results YOLOv7-w6 should have a higher FPS than YOLOR-p6.
However, using a batch size of 2, YOLOv7-w6 is slower than YOLOR-p6 (although YOLOv7-w6 has lower GPU utilization).
Could you suggest why this is the case? Were the results in both repos reported on different GPUs?
Thank you for your help in advance.