ethanhe42 / channel-pruning

Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
https://arxiv.org/abs/1707.06168
MIT License

ResNet-50-2X Inference time larger than original ResNet50? #88

Closed HeavySword closed 5 years ago

HeavySword commented 6 years ago

@yihui-he According to my tests, the ResNet-50-2X inference time is not smaller but significantly larger than the original ResNet50's! I found that the "Filter" layer spends too much time, about 2-4 ms per filter layer on a Titan Xp, yet your paper says: "(sampler) Computational cost for this operation could be ignored."

Is anything wrong with my test? Thanks!

ethanhe42 commented 5 years ago

The Filter layer is not well implemented here. In theory, the computational cost of this operation could be ignored.
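To see why the cost should be negligible in theory: the Filter layer only gathers the retained channels, performing no multiply-adds, while a convolution on the same feature map does thousands of FLOPs per element. A minimal NumPy sketch (shapes and names are hypothetical, for illustration only):

```python
import numpy as np

def filter_layer(x, kept_idx):
    """Channel-selection ("Filter") layer: keep only the retained channels.

    x: feature map of shape (N, C, H, W); kept_idx: indices of kept channels.
    This is a pure memory gather -- zero multiply-adds -- so in theory its
    cost is negligible next to a convolution over the same feature map.
    """
    return x[:, kept_idx, :, :]

# Hypothetical shapes, roughly ResNet-50 scale:
N, C, H, W = 1, 256, 56, 56
x = np.random.randn(N, C, H, W).astype(np.float32)
kept = np.arange(0, C, 2)  # keep half the channels (128)

y = filter_layer(x, kept)

# Rough cost comparison: one 3x3 conv producing the kept channels vs. the gather.
conv_flops = 2 * len(kept) * C * 3 * 3 * H * W  # multiply-adds of the conv
gather_elems = y.size                            # elements merely copied
print(y.shape)                      # (1, 128, 56, 56)
print(conv_flops // gather_elems)   # FLOPs per copied element: 4608
```

In practice, though, a naive implementation that launches a separate copy kernel per Filter layer (as reported above) adds real latency on the GPU; fusing the selection into the preceding or following convolution avoids this overhead.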

shiruipeng1985 commented 5 years ago

@HeavySword I have the same problem: ResNet-50-2X inference takes 0.1 s on a GTX 1060, but the original ResNet50 needs only 0.04 s! Did you solve this problem?