ryujaehun / pytorch-gpu-benchmark

Using famous CNN models in PyTorch, we run benchmarks on various GPUs.
MIT License

resnet with batchsize=12? #10

Closed: twmht closed this issue 4 years ago

twmht commented 4 years ago

Is this the result (https://github.com/ryujaehun/pytorch-gpu-benchmark/blob/master/fig/new_2080ti/GeForce_RTX_2080_Ti_1_gpus__half_model_inference.png) of running resnet with batch size=12?

I just found out that my inference time on a 2080 Ti is 15 ms with batch size=12, which differs from yours.

Any idea why?
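(For reference, numbers like this depend heavily on how the measurement is done: warm-up, CUDA synchronization, and averaging over many iterations. Below is a minimal timing sketch, assuming ResNet-50 in half precision with batch size 12 on a single GPU; the warm-up and iteration counts are arbitrary illustrative choices, not the repo's exact script.)

```python
import time
import torch
import torchvision.models as models

# Hypothetical reproduction: ResNet-50, half precision, batch size 12, single GPU.
model = models.resnet50().half().cuda().eval()
x = torch.randn(12, 3, 224, 224, dtype=torch.half, device="cuda")

with torch.no_grad():
    # Warm-up so cuDNN algorithm selection and CUDA init don't skew the timing.
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()

    iters = 100
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()  # wait for all kernels to finish before stopping the clock
    elapsed_ms = (time.time() - start) / iters * 1000

print(f"mean inference time: {elapsed_ms:.2f} ms per batch of 12")
```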

ryujaehun commented 4 years ago

The difference may be caused by cuDNN. The 2080 Ti has Tensor Cores and uses them for certain sizes, such as 16x16x16. However, this cannot be controlled from PyTorch; cuDNN decides which kernels to use, so I think that is the cause.
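(To illustrate what is and isn't controllable from PyTorch: `torch.backends.cudnn.benchmark` only asks cuDNN to autotune among its available convolution algorithms; whether a Tensor Core kernel ends up being selected is still cuDNN's decision. The model and input shape below are assumptions for illustration, not necessarily what either of us ran.)

```python
import torch
import torchvision.models as models

# With benchmark=True, cuDNN profiles candidate convolution algorithms
# (including Tensor Core kernels on Turing GPUs like the 2080 Ti) and caches
# the fastest one per input shape. PyTorch does not force Tensor Core use.
torch.backends.cudnn.benchmark = True

model = models.resnet50().half().cuda().eval()
x = torch.randn(12, 3, 224, 224, dtype=torch.half, device="cuda")

with torch.no_grad():
    out = model(x)  # first call triggers cuDNN's algorithm search for this shape
```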