soumith / convnet-benchmarks

Easy benchmarking of all publicly accessible implementations of convnets
MIT License
2.68k stars · 578 forks

Add caffe with CuDNN[R4] to benchmark. #90

Open cesarsalgado opened 8 years ago

soumith commented 8 years ago

I don't think this will give us a lot more data points, but I'm happy to do it. Getting a Caffe install right is always a bit of a tightrope act; I'll do it in a few days.

cesarsalgado commented 8 years ago

Thanks!

soumith commented 8 years ago

I've run the Caffe numbers here: https://github.com/soumith/convnet-benchmarks/commit/6f718dbcfdaefe1af6c04ab2be3927e0728b599e

It's strange, because the Caffe numbers look to be quite off (Caffe vs Torch-fp32):

- Alexnet: 128ms vs 81ms
- Overfeat: 430ms vs 268ms
- VGG-A: 680ms vs 529ms
- Googlenet: 484ms vs 470ms

The only thing I can think of right now is that Torch enables the CuDNN autotuner (via a caching mechanism keyed on sizes/strides), and I suspect that Caffe does not enable it and just uses the CuDNN heuristics, which do not always pick the best-performing algorithm.

In fact, I now suspect that TF may not enable the autotuner either.

The only network where Caffe looks close to Torch is Googlenet; it seems to have serious perf regressions on the other three (even though both frameworks are using the same underlying code, i.e. CuDNN R4 + CuBLAS 7.5).

Should I add these numbers to the README? Considering how sensitive the benchmarks have become, I would want someone from the Caffe ecosystem to take a quick look at the prototxt files and see if there are any recently added settings I should include.

beniz commented 8 years ago

Adding them with a brief warning containing your second paragraph seems like the right thing to do; better than sticking with the 'native' bench, IMO. Thanks for the great work. I can take a look at the Caffe bench and prototxt files later in the day if that helps.

beniz commented 8 years ago

OK, so quick remarks:

soumith commented 8 years ago

@beniz definitely up for a PR to bring it up to date. The missing ReLUs are definitely an oversight and have to be added.

hobofan commented 8 years ago

I recently looked into Caffe's performance while bringing our framework Leaf up to speed, and I can confirm that the biggest speed hit comes from not using the autotuner. Caffe also loses a bit of time (IIRC 2-3ms) because it reshapes its layers on every forward pass, during which it reallocates some cuDNN descriptors.
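The per-forward reshape cost described above can be avoided with a simple shape guard: only rebuild the descriptors when the input shape actually changes. A minimal Python sketch, assuming hypothetical stand-in descriptor objects rather than real cuDNN calls:

```python
class ConvLayer:
    """Sketch of the fix: recreate (hypothetical) cuDNN-style descriptors
    only when the input shape changes, instead of on every forward pass."""

    def __init__(self):
        self._cached_shape = None
        self._descriptor = None
        self.reshape_count = 0  # counts how often descriptors were rebuilt

    def _make_descriptor(self, shape):
        # Stands in for the costly descriptor setup (e.g. the real
        # cudnnSetTensor4dDescriptor / workspace reallocation path).
        self.reshape_count += 1
        return {"shape": shape}

    def forward(self, shape, data):
        # Cheap guard: skip the reshape entirely when the shape is unchanged,
        # which is the common case in a fixed-size benchmark loop.
        if shape != self._cached_shape:
            self._descriptor = self._make_descriptor(shape)
            self._cached_shape = shape
        return sum(data)  # placeholder for the convolution itself
```

With a fixed input shape, the descriptor is built once and every later forward pass hits the guard, which is where the claimed ~2-3ms per iteration would be recovered.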