As mentioned in the title, I guess that the more layers a model has, the greater the speedup after pruning.
Because many acceleration methods fail on very deep models like ResNet-50 and Xception-50. ResNet-18 and ResNet-34 are not bottleneck architectures; they are similar to VGG-16.
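For a concrete sense of the difference, here is a minimal back-of-the-envelope sketch (plain Python; the channel counts are the standard first-stage ResNet values, assumed for illustration and not taken from this repo). In a VGG-like basic block essentially all FLOPs sit in 3x3 convolutions, which channel pruning shrinks directly, while in a bottleneck block the 3x3 conv accounts for only about half of the block's FLOPs, with the rest in cheap 1x1 convolutions:

```python
# Rough FLOPs comparison of a ResNet basic block (ResNet-18/34)
# vs. a bottleneck block (ResNet-50), first stage, 56x56 feature maps.
# Channel counts are the standard ResNet values, used here only for illustration.

def conv_flops(h, w, c_in, c_out, k):
    # multiply-accumulates of a k x k convolution on an h x w feature map
    return h * w * c_in * c_out * k * k

H = W = 56

# Basic block: two 3x3 convs, 64 -> 64 -> 64 (VGG-like 3x3 stack)
basic = 2 * conv_flops(H, W, 64, 64, 3)

# Bottleneck block: 1x1 reduce, 3x3, 1x1 expand (256 -> 64 -> 64 -> 256)
mid_3x3 = conv_flops(H, W, 64, 64, 3)
bottleneck = conv_flops(H, W, 256, 64, 1) + mid_3x3 + conv_flops(H, W, 64, 256, 1)

print(f"basic block      : {basic / 1e6:.1f} MFLOPs (all in 3x3 convs)")
print(f"bottleneck block : {bottleneck / 1e6:.1f} MFLOPs "
      f"({100 * mid_3x3 / bottleneck:.0f}% in the 3x3 conv)")
```

With these numbers, pruning the channels of the 3x3 layers cuts a basic block's cost almost proportionally, but in a bottleneck block a large share of the cost remains in the surrounding 1x1 convolutions, which is one reason methods tuned on VGG-like networks transfer poorly to ResNet-50.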
It's not like that. A 2x reduction in FLOPs turned out to be impractical. Nothing more than that; pruning is not a better optimization method.
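To illustrate the gap between theoretical and realized speedup being debated here, a rough timing sketch (assuming PyTorch is installed; this is not part of the channel-pruning code) that halves the FLOPs of one 3x3 convolution by shrinking its channels by a factor of sqrt(2) and compares the measured wall-clock speedup with the theoretical 2x. The measured ratio is typically below 2x because of memory traffic and framework overhead:

```python
# Compare measured latency of a conv layer before and after a ~2x FLOPs cut.
# Channel counts (256 vs. 181 ~= 256/sqrt(2)) are illustrative assumptions.
import time
import torch
import torch.nn as nn

def bench(conv, x, iters=50):
    with torch.no_grad():
        for _ in range(5):           # warm-up runs
            conv(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            conv(x)
        return (time.perf_counter() - t0) / iters

x_full   = torch.randn(1, 256, 56, 56)
x_pruned = torch.randn(1, 181, 56, 56)

full   = nn.Conv2d(256, 256, 3, padding=1)
pruned = nn.Conv2d(181, 181, 3, padding=1)   # ~2x fewer FLOPs than `full`

t_full, t_pruned = bench(full, x_full), bench(pruned, x_pruned)
print(f"measured speedup: {t_full / t_pruned:.2f}x (theoretical: ~2x)")
```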
To your knowledge, which work can practically accelerate ResNet-50 by up to 2x without a special implementation?
Not yet. The ResNet model is not well suited to pruning.