ethanhe42 / channel-pruning

Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
https://arxiv.org/abs/1707.06168
MIT License

Why did you prune ResNet-50 instead of ResNet-18 or ResNet-34? #40

Closed eeric closed 7 years ago

eeric commented 7 years ago

As mentioned in the title. I would guess that the more layers a network has, the greater the speedup after pruning.

ethanhe42 commented 7 years ago

Because many acceleration methods fail on very deep models like ResNet-50 and Xception-50. ResNet-18 and ResNet-34 are not bottleneck architectures; they are similar to VGG-16.
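For reference, this is the structural difference being pointed at: ResNet-18/34 stack plain blocks of two 3×3 convolutions (VGG-like), while ResNet-50 and deeper use 1×1–3×3–1×1 bottleneck blocks, which is part of why very deep bottleneck models are harder to accelerate. A minimal sketch of the two block types (written in PyTorch for brevity; the repo itself is Caffe-based, so this is illustrative only):

```python
# Illustrative sketch only -- not the repo's Caffe code.
import torch.nn as nn

def basic_block(channels):
    """ResNet-18/34 building block: two 3x3 convs, VGG-like."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
    )

def bottleneck_block(channels, expansion=4):
    """ResNet-50+ building block: 1x1 reduce -> 3x3 -> 1x1 expand."""
    mid = channels // expansion
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1, bias=False),
        nn.BatchNorm2d(mid),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(mid),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, channels, kernel_size=1, bias=False),
        nn.BatchNorm2d(channels),
    )
```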


eeric commented 7 years ago

It's not like that. A 2× reduction in FLOPs is impractical.
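For context on what "2×" means here: a convolution's FLOPs scale with both its input and output channel counts, so pruning the channels between layers directly cuts the cost of the layers on either side. A rough back-of-the-envelope sketch (plain Python, illustrative numbers only):

```python
def conv_flops(c_in, c_out, k, h, w):
    """Multiply-accumulates for a k x k convolution on an h x w feature map."""
    return c_in * c_out * k * k * h * w

# Example: a 3x3 conv with 256 -> 256 channels on a 56x56 feature map.
full = conv_flops(256, 256, 3, 56, 56)

# Pruning this layer's input channels to 128 (i.e. pruning the output
# channels of the preceding layer) halves its FLOPs:
pruned = conv_flops(128, 256, 3, 56, 56)

print(full / pruned)  # -> 2.0, a 2x FLOPs reduction for this layer
```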

eeric commented 7 years ago

Nothing more than that. Pruning is not a better optimization method.

ethanhe42 commented 7 years ago

To your knowledge, which work could practically accelerate ResNet-50 by up to 2× without a special implementation?


eeric commented 7 years ago

Not yet. The ResNet model is not well suited to pruning.