vadimkantorov opened this issue 8 years ago
Actually, MatConvNet's convolution layer automatically switches to a fully-connected layer if input size == kernel size. You can manually do the same thing in Torch. Example (input size (NCHW) = 256x512x7x7, output (N x featureSize) = 256x4096):
model:add(nn.View(7*7*512))
model:add(nn.Linear(7*7*512,4096))
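For the record, a hedged sketch of carrying the weights of an existing (e.g. pretrained) SpatialConvolution(512, 4096, 7, 7) over to that View + Linear pair; conv and model are placeholder names here, and the reshape assumes the usual nOutputPlane x nInputPlane x kH x kW weight layout:

require 'nn'

local conv = nn.SpatialConvolution(512, 4096, 7, 7)  -- stands in for the pretrained layer
local model = nn.Sequential()

-- flatten the conv weights: 4096 x 512 x 7 x 7  ->  4096 x (512*7*7)
local fc = nn.Linear(512*7*7, 4096)
fc.weight:copy(conv.weight:view(4096, 512*7*7))
fc.bias:copy(conv.bias)

-- an NCHW input of 256x512x7x7 flattens to 256x(512*7*7) in the same (c, h, w) order,
-- so the Linear output matches the convolution output (up to the trailing 1x1 dims)
model:add(nn.View(512*7*7))
model:add(fc)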
This is actually surprising, because the cudnn convolution has implicit and explicit GEMM algorithms, as well as a bunch of others. Maybe their GEMM is lagging behind the cuBLAS GEMM. Would you know anything about this, @ngimel?
For backward, the selection of algorithms is smaller (in particular, there is no explicit gemm), and they are not particularly optimized for the case where input size = kernel size. cudnn does not have a runtime dependency on cublas, and includes only a limited subset of cublas gemm kernels, so even if explicit gemm algorithms were added to the backward path, there could conceivably be many situations where cudnn would be slower than cublas. I think it is best (as suggested by @vadimkantorov and @Jerrynet) to convert SpatialConvolution to Linear when input size = kernel size.
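For a whole network, a minimal sketch of that conversion over an nn.Sequential could look like the following; convToLinear is a made-up helper, it assumes the container exposes its layers through .modules and that the spatial size reaching each convolution (inH x inW) is known in advance:

require 'nn'

-- hypothetical helper: swap every SpatialConvolution whose kernel covers the whole
-- inH x inW input plane for an equivalent View + Linear pair
local function convToLinear(model, inH, inW)
   for i, m in ipairs(model.modules) do
      if torch.isTypeOf(m, 'nn.SpatialConvolution') and m.kH == inH and m.kW == inW then
         local fc = nn.Linear(m.nInputPlane * m.kH * m.kW, m.nOutputPlane)
         fc.weight:copy(m.weight:view(m.nOutputPlane, -1))
         fc.bias:copy(m.bias)
         model.modules[i] = nn.Sequential()
            :add(nn.View(m.nInputPlane * m.kH * m.kW))
            :add(fc)
      end
   end
   return model
end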
Thanks Natalia! It is often convenient to keep SpatialConvolution for 1x1; I think we should add nn.Linear.updateOutput(self, input)-like calls, with views around, for this special case.
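Something along those lines might look like the sketch below; forwardAsLinear is a made-up illustration (not an existing nn/cudnn function), it only covers the forward pass, is shown with nn on the CPU for simplicity, and the backward passes would need the symmetric view trick:

require 'nn'

-- hypothetical illustration of "nn.Linear with views around":
-- valid only when the input spatial size equals the kernel size (output is 1x1)
local function forwardAsLinear(conv, input)
   local nBatch = input:size(1)
   local lin = nn.Linear(conv.nInputPlane * conv.kH * conv.kW, conv.nOutputPlane)
   lin.weight = conv.weight:view(conv.nOutputPlane, -1)  -- views share storage, no copy
   lin.bias = conv.bias
   local out = lin:forward(input:view(nBatch, -1))
   return out:view(nBatch, conv.nOutputPlane, 1, 1)
end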
Sergey, please note that 1x1 SpatialConvolution in NCHW does not map directly onto Linear (it would for NHWC layout for images, and similarly for filters), and for Maxwell, cudnn performance for this case (NCHW) should be pretty similar to cublas anyway. I don't remember Kepler benchmarks off the top of my head. The original issue was about convolution where the image H*W = kH*kW, where cudnn performance can be pretty bad. It generally does not do too well with odd (as in: not small, not square) filter sizes, especially on backward.
@ngimel afaik 1x1 SpatialConvolution in NCHW DOES map to Linear. We have used this trick many times. I think it is because gemm allows transpose as a mode. Here's a simple test case:
require 'nn'
a = nn.Linear(128, 32)
b = nn.SpatialConvolution(128, 32, 1, 1)
b.weight:copy(a.weight);
b.bias:copy(a.bias);
input = torch.randn(16, 128, 1, 1)
outlinear = a:forward(input:view(16,128))
outconv = b:forward(input)
print((outlinear - outconv):abs():max())
And the output is 8.8817841970013e-16
Ohh, I assume you are talking about larger inputs. Yes, indeed it does not map then. It only maps correctly, as you said, when H*W = kH*kW. Sorry for the confusion.
cudnn R4 doesn't choose the optimal algorithm in the fully-connected mode, even with cudnn.benchmark = true, which results in a ~20x slower backward pass compared to MatConvNet.
Torch: [benchmark output]
MatConvNet: [benchmark output]
Replacing cudnn.SpatialConvolution with nn.Linear makes Torch and MatConvNet even: [benchmark output]
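For anyone who wants to reproduce the comparison locally, here is a rough timing sketch under stated assumptions (a machine with cutorch/cunn/cudnn installed; timeIt is a made-up helper; the sizes match the 256x512x7x7 example above). It is only a sketch of the setup, not the benchmark that produced the numbers above:

require 'cudnn'
require 'cunn'

cudnn.benchmark = true

local input = torch.CudaTensor(256, 512, 7, 7):normal()
local gradOut = torch.CudaTensor(256, 4096):normal()

local conv = cudnn.SpatialConvolution(512, 4096, 7, 7):cuda()
local fc = nn.Sequential()
   :add(nn.View(512*7*7))
   :add(nn.Linear(512*7*7, 4096))
   :cuda()

-- average the time of 10 forward+backward passes
local function timeIt(net, gout)
   net:forward(input)
   net:backward(input, gout)   -- warm-up, triggers cudnn algorithm selection
   cutorch.synchronize()
   local timer = torch.Timer()
   for i = 1, 10 do
      net:forward(input)
      net:backward(input, gout)
   end
   cutorch.synchronize()
   return timer:time().real / 10
end

print('cudnn.SpatialConvolution (s/iter):', timeIt(conv, gradOut:view(256, 4096, 1, 1)))
print('nn.View + nn.Linear      (s/iter):', timeIt(fc, gradOut))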