yonghenglh6 / DepthwiseConvolution

A personal depthwise convolution layer implementation on Caffe by liuhao (GPU only).

Have you implemented the "cpp" file of DepthwiseConvolution for CPU? #4

Open csyking opened 7 years ago

csyking commented 7 years ago

It seems you have only implemented the layer for CUDA, while the CPU implementation is still the original Caffe "conv + group" approach?

The same issue is here: https://github.com/Zehaos/MobileNet/issues/22

Do you plan to implement DepthwiseConvolutionLayer for CPU? Any contribution would be greatly appreciated!

Best.

ONLY-VEDA commented 7 years ago

He has already implemented the CPU version; you can find it in the code.

mychina75 commented 7 years ago

But it looks like the CPU version is not optimized for depthwise convolution.
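
For reference, here is a minimal sketch (illustrative NumPy, not this repo's code) of what a dedicated depthwise pass computes: one independent 2-D convolution per channel, with no cross-channel accumulation, which is what the grouped-convolution fallback spends its time emulating:

```python
import numpy as np

def depthwise_conv2d(x, w, stride=1, pad=0):
    """Naive depthwise convolution (channel multiplier 1).

    x: input,   shape (C, H, W)
    w: filters, shape (C, kH, kW) -- one filter per input channel
    """
    C, H, W = x.shape
    _, kH, kW = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    oH = (H + 2 * pad - kH) // stride + 1
    oW = (W + 2 * pad - kW) // stride + 1
    out = np.zeros((C, oH, oW), dtype=x.dtype)
    for c in range(C):  # each channel is convolved independently
        for i in range(oH):
            for j in range(oW):
                patch = xp[c, i * stride:i * stride + kH,
                              j * stride:j * stride + kW]
                out[c, i, j] = np.sum(patch * w[c])
    return out
```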

yonghenglh6 commented 7 years ago

I'm sorry I can't promise anything. It is tough work, and I am currently busy with other projects.

ryusaeba commented 7 years ago

Hi @yonghenglh6
Can we use your cpp/hpp/cu files to load the MobileNet you posted as pretrained weights for fine-tuning? I ask because, when we change a conv layer to depthwise, can Caffe still load the pretrained weights? Does Caffe match pretrained weights by layer name?

ryusaeba commented 7 years ago

I checked http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html and saw the following statement: "If we provide the weights argument to the caffe train command, the pretrained weights will be loaded into our model, matching layers by name." So I assume the answer to my question is yes: Caffe matches pretrained weights by layer name. If I am wrong, please correct me. Thanks!

yonghenglh6 commented 7 years ago

@ryusaeba Yes, that is why I reuse the original conv_param instead of defining a new special parameter. You can simply change the layer type without any compatibility cost.
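
For instance, loading should work like this with pycaffe (an illustrative sketch; both file names are placeholders):

```python
import caffe

caffe.set_mode_gpu()  # this repo's layer is GPU-only

# File names below are hypothetical. The prototxt is a copy of the
# original MobileNet deploy file with the conv layers retyped to
# "DepthwiseConvolution" but their names (and conv_param) unchanged,
# so Caffe can still match the original pretrained weights by layer name.
net = caffe.Net('mobilenet_depthwise_deploy.prototxt',  # hypothetical
                'mobilenet.caffemodel',                 # hypothetical
                caffe.TEST)
```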

ryusaeba commented 7 years ago

@yonghenglh6 Thanks! I got an all-pass message from check.py. I then applied DepthwiseConvolution to the inference path of https://github.com/shicai/MobileNet-Caffe; the top-1 accuracy is the same, but there is a slight difference in the loss. I expected the loss to be identical. Do you have any idea why?

yonghenglh6 commented 7 years ago

@ryusaeba
The slight difference comes from the BLAS algorithm. I did not use the BLAS library, while the original conv layer does. I assume the BLAS routines sacrifice a little precision for better performance, because the depthwise outputs match my handcrafted computation.
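
As an illustration (a minimal NumPy sketch, not the layer code), accumulation order alone can change the last bits of a float32 dot product, which is enough to produce a slightly different loss:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)
w = rng.standard_normal(10_000).astype(np.float32)

# BLAS-backed dot product: vectorized, possibly reordered accumulation.
blas_result = np.dot(x, w)

# Naive sequential accumulation, like a handwritten kernel loop.
acc = np.float32(0)
for xi, wi in zip(x, w):
    acc += xi * wi

# The two results typically agree to ~6 significant digits but differ
# in the last bits, purely from rounding order.
print(blas_result, acc, blas_result - acc)
```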

libra7 commented 7 years ago

Hello, have you implemented DepthwiseConvolutionLayer for CPU?

sunjunlishi commented 6 years ago

.....wait