KeyKy / mobilenet-mxnet


about channelwise convolution #9

Closed wenhe-jia closed 7 years ago

wenhe-jia commented 7 years ago

I noticed that you used ChannelwiseConvolution in the mobilenet-symbol.json file and in mobilenet-faster.py. I am using MXNet 0.11.1; do I have to replace ChannelwiseConvolution with depthwise convolution instead? Your update says that MXNet 0.11.1 supports depthwise convolution. Thanks for your reply!

KeyKy commented 7 years ago

You no longer need ChannelwiseConvolution; just set num_group in MXNet 0.11.1. I will delete the mobilenet-faster.py symbol.
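For context, in MXNet a depthwise layer is just `mx.sym.Convolution(..., num_group=C)` with num_group equal to the channel count. The parameter savings behind the MobileNet design can be sketched with plain arithmetic (the channel counts below are illustrative, not taken from this repo):

```python
def conv_params(c_in, c_out, k, groups=1):
    # Each of the c_out filters sees only c_in // groups input channels,
    # so a grouped conv has groups-times fewer weights (bias ignored).
    assert c_in % groups == 0
    return (c_in // groups) * k * k * c_out

# A standard 3x3 conv, 32 -> 64 channels:
standard = conv_params(32, 64, 3)              # 18432 weights
# The MobileNet replacement: depthwise 3x3 (num_group=32) + pointwise 1x1:
depthwise = conv_params(32, 32, 3, groups=32)  # 288 weights
pointwise = conv_params(32, 64, 1)             # 2048 weights
print(standard, depthwise + pointwise)         # 18432 vs 2336, roughly 8x fewer
```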

BiranLi commented 7 years ago

@KeyKy @LeonJWH And cuDNN v7 supports grouped conv now!

KeyKy commented 7 years ago
  1. Up to 2.5x faster training of ResNet50 and 3x faster training of NMT language translation LSTM RNNs on Tesla V100 vs. Tesla P100
  2. Accelerated convolutions using mixed-precision Tensor Cores operations on Volta GPUs
  3. Grouped Convolutions for models such as ResNeXt and Xception and CTC (Connectionist Temporal Classification) loss layer for temporal classification

Good to hear that!

BiranLi commented 7 years ago

It gives roughly a 10% speedup over normal conv.

wenhe-jia commented 7 years ago

In the repo https://github.com/shicai/MobileNet-Caffe, only regular conv layers with group=32 are used as depthwise conv layers. What about depthwise convolution proper? Is there any difference between a grouped standard convolution and a depthwise convolution?
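To make the difference concrete, here is a minimal NumPy sketch (my own, not from either repo): a depthwise conv is exactly a standard conv whose weight tensor is block-diagonal across channels, i.e. a grouped conv where the number of groups equals the number of channels.

```python
import numpy as np

def standard_conv2d(x, w):
    """Standard conv: x (C_in, H, W), w (C_out, C_in, k, k).
    Every output channel mixes ALL input channels."""
    C_out, C_in, k, _ = w.shape
    _, H, W = x.shape
    out = np.zeros((C_out, H - k + 1, W - k + 1))
    for o in range(C_out):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return out

def depthwise_conv2d(x, w):
    """Depthwise conv: x (C, H, W), w (C, k, k).
    One filter per channel; no cross-channel mixing (group = C)."""
    C, H, W = x.shape
    _, k, _ = w.shape
    out = np.zeros((C, H - k + 1, W - k + 1))
    for c in range(C):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * w[c])
    return out

# A depthwise conv equals a standard conv with a block-diagonal weight
# tensor: output channel c only looks at input channel c.
rng = np.random.RandomState(0)
x = rng.rand(4, 6, 6)
w_dw = rng.rand(4, 3, 3)
w_std = np.zeros((4, 4, 3, 3))
for c in range(4):
    w_std[c, c] = w_dw[c]
assert np.allclose(depthwise_conv2d(x, w_dw), standard_conv2d(x, w_std))
```

So Caffe's grouped `Convolution` layer with group equal to the channel count computes the same thing as a dedicated depthwise layer; a dedicated implementation is only a potential speed/memory optimization, not a different operation.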

wenhe-jia commented 7 years ago

ok, thanks a lot!