Closed s0606757 closed 6 years ago
refer to the prototxt https://github.com/yihui-he/channel-pruning/blob/master/temp/channel_pruning.prototxt
Sorry to bother you again ^^" I don't fully understand it, so let me use an example for my question.
For example, VGG-16 at 5x speedup:

```
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  convolution_param {
    num_output: 24
    pad: 1
    kernel_size: 3
  }
}
```
1. Is the size of the output activations 224x224x24 after conv1_1?
2. Must the elements in the output activation volume all be non-zero?
yes
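For reference, the output spatial size follows the standard convolution formula. A minimal sketch in plain Python (no Caffe required), applied to the conv1_1 parameters quoted above, assuming a 224x224 input and the default stride of 1:

```python
def conv_output_shape(in_h, in_w, kernel, pad, stride, num_output):
    """Standard conv output size: floor((in + 2*pad - kernel) / stride) + 1."""
    out_h = (in_h + 2 * pad - kernel) // stride + 1
    out_w = (in_w + 2 * pad - kernel) // stride + 1
    return (out_h, out_w, num_output)

# conv1_1 in the 5x-pruned prototxt: kernel_size 3, pad 1, num_output 24
print(conv_output_shape(224, 224, kernel=3, pad=1, stride=1, num_output=24))
# -> (224, 224, 24)
```

With pad 1 and kernel 3, the spatial size is preserved, so only the channel count (24 instead of the original 64) changes.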
I appreciate your time and effort in answering my question ^_^
Actually, my research is on a CNN accelerator that supports sparse CNNs, so I am surveying sparse models such as AlexNet, VGG, and ResNet. Do you know of any other papers that have released their pruned or sparse VGG models on the Internet, as you have?
Lastly, words are not enough to express my gratitude. Thank you so much!!!!
Hi~
May I ask a question about the sparsity of VGG-16 (version: channel pruning, speedup: 5x)?
Could you list it for me?
Thank you so much for helping me^_^
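For anyone wanting to measure sparsity themselves, here is a minimal sketch using only NumPy; the small array is a hypothetical stand-in for a layer's weight blob (e.g. one loaded from a `.caffemodel`):

```python
import numpy as np

def sparsity(weights, threshold=0.0):
    """Fraction of entries with magnitude <= threshold (0.0 counts exact zeros)."""
    weights = np.asarray(weights)
    return np.count_nonzero(np.abs(weights) <= threshold) / weights.size

# Hypothetical stand-in for a conv layer's weight blob.
w = np.array([[0.0, 0.5, 0.0],
              [-0.2, 0.0, 0.1]])
print(sparsity(w))  # -> 0.5
```

Note that channel pruning removes whole channels, so the remaining blobs are typically dense (consistent with the answer above); element-wise sparsity as measured here mainly applies to magnitude-pruned models.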