BVLC / caffe

Caffe: a fast open framework for deep learning.
http://caffe.berkeleyvision.org/

Depthwise convolution #5649

Open zjchuyp opened 7 years ago

zjchuyp commented 7 years ago

Training depthwise convolution in Caffe is very slow. Is there a plan to reimplement depthwise convolution?

lolongcovas commented 7 years ago

Do you mean the `group` parameter in the conv layer?


zjchuyp commented 7 years ago

Yes, depthwise convolution is from the paper "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" (https://arxiv.org/abs/1704.04861). Caffe can train this network by setting the group number equal to the input channel number, but training is very slow because Caffe uses a `for` loop to run im2col + sgemm once per group. TF has a dedicated implementation of depthwise convolution.
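For reference, a depthwise layer set up this way would look roughly like the prototxt below (a minimal sketch: the layer and blob names are placeholders, and `num_output` and `group` must both equal the input channel count, assumed to be 32 here):

```
layer {
  name: "conv_dw"
  type: "Convolution"
  bottom: "data"
  top: "conv_dw"
  convolution_param {
    num_output: 32   # must equal the number of input channels
    group: 32        # one filter per channel -> depthwise
    kernel_size: 3
    stride: 1
    pad: 1
  }
}
```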

lolongcovas commented 7 years ago

I also tried it a few weeks ago. You are right: low speed and high memory consumption.

ccJia commented 7 years ago

I ran into this problem too. I looked at the TF function "DepthwiseConv2DKernel" and didn't find any difference, except that TF uses Eigen. Did you solve this problem?

willyd commented 7 years ago

You may be interested in this https://github.com/BVLC/caffe/pull/5665

lolongcovas commented 7 years ago

@zjchuyp

> Caffe can train this net by setting the group number equal to the input channel number, but train speed is very slow because Caffe uses a "for" loop to do im2col+sgemm group-number times.

I don't think Caffe performs im2col `group_` times; im2col runs once, and only the gemm is looped per group:


```cpp
template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_gemm(const Dtype* input,
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {
      // im2col runs once per forward call, not once per group.
      conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    }
    col_buff = col_buffer_.cpu_data();
  }
  // Only the gemm is repeated group_ times, on offset slices of the buffers.
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
              group_, conv_out_spatial_dim_, kernel_dim_,
              (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
              (Dtype)0., output + output_offset_ * g);
  }
}
```
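For contrast, a dedicated depthwise kernel avoids the im2col buffer and the per-group gemm loop entirely: it just applies one small filter per channel. Below is a minimal CPU sketch (not Caffe code; stride 1, no padding, and the function name is made up for illustration):

```cpp
#include <cassert>
#include <vector>

// Depthwise convolution sketch: each input channel is convolved with its
// own k x k filter. Layouts are CHW for input/output and C x k x k for
// weights. No im2col buffer and no per-group gemm calls are needed.
std::vector<float> depthwise_conv2d(const std::vector<float>& input,
                                    const std::vector<float>& weights,
                                    int channels, int height, int width,
                                    int k) {
  const int out_h = height - k + 1;  // "valid" output size, stride 1
  const int out_w = width - k + 1;
  std::vector<float> output(channels * out_h * out_w, 0.f);
  for (int c = 0; c < channels; ++c) {
    const float* in = &input[c * height * width];
    const float* w = &weights[c * k * k];
    float* out = &output[c * out_h * out_w];
    for (int y = 0; y < out_h; ++y) {
      for (int x = 0; x < out_w; ++x) {
        float acc = 0.f;
        for (int i = 0; i < k; ++i)
          for (int j = 0; j < k; ++j)
            acc += in[(y + i) * width + (x + j)] * w[i * k + j];
        out[y * out_w + x] = acc;
      }
    }
  }
  return output;
}
```

A fast implementation would additionally vectorize the inner loops, but even this naive form shows why a dedicated kernel can beat `group_` tiny gemms: each output element touches only one channel's data.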
zjchuyp commented 7 years ago

@lolongcovas You are right! thx.

zjchuyp commented 7 years ago

@willyd thanks a lot, I'll try it.

mathmanu commented 7 years ago

@lolongcovas, @willyd Can you please share your commit/code if you have for this? Thanks.

ccJia commented 7 years ago

@zjchuyp Hi, TF also uses a packing step to gather contiguous memory for gemm, similar to im2col in Caffe. Because of its data layout (traversed by channel), it gets longer contiguous runs, which SIMD can exploit for higher speed. So why is this approach several times faster than Caffe's?

winggan commented 7 years ago

Is it still slow with the cuDNN implementation? According to the code, the cuDNN convolution calls for the different groups are all asynchronous on separate CUDA streams and are synchronized at the end of forward/backward, so the GPU should be utilized as much as possible.

birdwcp commented 7 years ago

@gzygzy9211 I had to turn cuDNN off, or it crashes (Check failed: status == CUDNN_STATUS_SUCCESS).

birdwcp commented 7 years ago

@willyd thanks a lot

winggan commented 7 years ago

@birdwcp I think you should dig into it to find the reason.

alialbawi commented 7 years ago

Hi all. I am looking for a conv layer without built-in im2col; I want it to take its input directly from an im2col output.

mprat commented 6 years ago

To get faster depthwise convolutions, a separate gemm call needs to be implemented. As far as I know, no one has submitted a PR against this version of Caffe to do so.