BobLiu20 / YOLOv3_PyTorch

Full implementation of YOLOv3 in PyTorch

Ran some profiling on a GTX1080Ti #9

Open joaqo opened 6 years ago

joaqo commented 6 years ago

Hey, I modified the eval script a bit to run some predictions and measure the FPS.

I was getting much lower FPS than what is quoted in https://github.com/eriklindernoren/PyTorch-YOLOv3, so I decided to do some profiling. The core of the model (the part that runs on the GPU) runs at about 90 FPS, which is great, but when I add the rest of the pipeline, such as NMS and input image rescaling, the FPS drops to around 15.
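Roughly, the two numbers come from timing the GPU forward pass alone versus the full pipeline; a minimal sketch of that kind of measurement (with placeholder `preprocess` and `postprocess` helpers, not this repo's actual functions):

```python
import time
import torch

def measure_fps(model, images, preprocess, postprocess, full_pipeline=True):
    """Frames per second. With full_pipeline=False, only the GPU forward
    pass is timed (inputs are resized and moved to the GPU outside the timer)."""
    model.eval()
    if not full_pipeline:
        images = [preprocess(img).unsqueeze(0).cuda() for img in images]
    torch.cuda.synchronize()
    start = time.time()
    with torch.no_grad():
        for img in images:
            if full_pipeline:
                img = preprocess(img).unsqueeze(0).cuda()  # CPU resize + copy to GPU
            detections = model(img)                        # GPU forward pass
            if full_pipeline:
                detections = postprocess(detections)       # e.g. NMS on the CPU
    torch.cuda.synchronize()
    return len(images) / (time.time() - start)
```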

Am I doing something wrong? Have you tried the FPS on your setup?

Cheers!

BobLiu20 commented 6 years ago

Hi @joaqo, the quoted FPS measures only the backbone, and the input size is 256x256. Please review the paper and the other GitHub repo.

Anyway, I have added an FPS test script in the test folder. You can use it to measure FPS with different batch sizes.
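The backbone-only measurement boils down to timing repeated forward passes on a fixed 256x256 batch; a rough sketch of that idea (assuming a `backbone` module; the script in the test folder is the authoritative version):

```python
import time
import torch

def backbone_fps(backbone, batch_size, n_iters=100, input_size=256):
    """Images per second for the backbone alone at a fixed input size."""
    backbone.eval().cuda()
    x = torch.randn(batch_size, 3, input_size, input_size).cuda()
    with torch.no_grad():
        for _ in range(10):              # warm-up so CUDA init is excluded
            backbone(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(n_iters):
            backbone(x)
        torch.cuda.synchronize()
    sec_per_image = (time.time() - start) / (n_iters * batch_size)
    return 1.0 / sec_per_image

# e.g. for bs in range(1, 10): print(bs, backbone_fps(backbone, bs))
```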

BobLiu20 commented 6 years ago

The FPS of the full YOLOv3 from the paper: [image]

joaqo commented 6 years ago

Oh, thanks a lot! I will give the FPS script a run on my GPU and report back, in case you want to post some benchmarks for different GPUs!

XiaXuehai commented 6 years ago

@BobLiu20 I ran the eval script on a GTX 1080, and the output is better than the paper's numbers?

```
Batch_Size: 1, Inference_Time: 0.02235 s/image, FPS: 44.747691030733264
Batch_Size: 2, Inference_Time: 0.01832 s/image, FPS: 54.59051620351389
Batch_Size: 3, Inference_Time: 0.01636 s/image, FPS: 61.1291111108592
Batch_Size: 4, Inference_Time: 0.01537 s/image, FPS: 65.07194720359458
Batch_Size: 5, Inference_Time: 0.01536 s/image, FPS: 65.10865661941776
Batch_Size: 6, Inference_Time: 0.01509 s/image, FPS: 66.28779101314666
Batch_Size: 7, Inference_Time: 0.01504 s/image, FPS: 66.495594761547
Batch_Size: 8, Inference_Time: 0.01471 s/image, FPS: 67.99880056907445
Batch_Size: 9, Inference_Time: 0.01508 s/image, FPS: 66.3294118501506
```
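For context, the reported FPS appears to be just the reciprocal of the per-image inference time (batch time divided by batch size), so larger batches raise per-image throughput even though per-batch latency grows. A quick check of the batch-size-8 line, assuming that relationship:

```python
# Sanity check: FPS = 1 / (seconds per image).
inference_time = 0.01471      # s/image at Batch_Size 8 (rounded in the log)
print(1.0 / inference_time)   # ~67.98, consistent with the reported 67.9988
```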