Eric-mingjie / rethinking-network-pruning

Rethinking the Value of Network Pruning (Pytorch) (ICLR 2019)
MIT License
1.51k stars 293 forks

prune mobilenetv2 #5

Open CF2220160244 opened 5 years ago

CF2220160244 commented 5 years ago

Hello @liuzhuang13 @Eric-mingjie, have you ever tried pruning MobileNetV2? I tried pruning MobileNetV2 with several methods, but it seems hard to train the pruned model to convergence on ImageNet.

liuzhuang13 commented 5 years ago

@CF2220160244 Could you share the code (for pruning and training from scratch) and concrete results, so we can look into possible reasons?

CF2220160244 commented 5 years ago

Oh, I did not use the algorithms from your rethinking-network-pruning implementation. Training the unpruned MobileNetV2 needs more epochs than common backbone networks, and training the pruned MobileNetV2 also needs more epochs.

luluvi commented 5 years ago

Hello @liuzhuang13 @Eric-mingjie, have you ever tried pruning MobileNetV2? I tried pruning MobileNetV2 with several methods, but it seems hard to train the pruned model to convergence on ImageNet.

Hello, I am also trying to prune MobileNetV2, but I cannot find any information or papers, so I would like to know: what is the effect of the pruning? Is it feasible, and does the accuracy drop significantly?

liuzhuang13 commented 5 years ago

@luluvi Sorry but we don't have much experience on mobilenet pruning.

eezywu commented 5 years ago

@CF2220160244 @luluvi Hi~ You can check the code here. I will write up an introduction to how to prune MobileNetV2 when I have time.

mstoye commented 5 years ago

Thanks a lot for sharing. Did you try it on the ImageNet dataset? The link does not seem to specify the dataset, and I would be interested in results for channel pruning of MobileNetV2 on ImageNet.

apxlwl commented 4 years ago

@CF2220160244 @luluvi Hi, I applied the network-slimming approach to MobileNetV2: MobileNet-v2-pruning. The code is in PyTorch and organized like this project. I hope this helps.
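For readers unfamiliar with network slimming, here is a minimal PyTorch sketch of the core idea (an L1 penalty on BatchNorm scale factors during training, then pruning channels whose scale falls below a global threshold). The function names and the toy model are illustrative only, not taken from the linked repo:

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, weight=1e-4):
    """Sum of |gamma| over all BatchNorm2d layers; add this to the training loss."""
    return weight * sum(
        m.weight.abs().sum()
        for m in model.modules()
        if isinstance(m, nn.BatchNorm2d)
    )

def channel_keep_masks(model, prune_ratio=0.5):
    """Pick a global threshold on |gamma| and return one boolean keep-mask per BN layer."""
    gammas = torch.cat([
        m.weight.data.abs().flatten()
        for m in model.modules()
        if isinstance(m, nn.BatchNorm2d)
    ])
    threshold = torch.quantile(gammas, prune_ratio)
    return [
        m.weight.data.abs() > threshold
        for m in model.modules()
        if isinstance(m, nn.BatchNorm2d)
    ]

if __name__ == "__main__":
    # Toy stand-in for one conv + BN stage of an inverted-residual block.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1, bias=False),
        nn.BatchNorm2d(16),
        nn.ReLU6(inplace=True),
    )
    penalty = bn_l1_penalty(model)      # scalar tensor, added to the loss
    masks = channel_keep_masks(model)   # which channels to keep per BN layer
```

Note that MobileNetV2 is harder than plain CNNs here: depthwise convolutions tie input and output channels together, and the residual adds tie the output widths of blocks in the same stage, so the per-layer masks must be made consistent across those coupled layers before physically removing channels.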

leoluopy commented 3 years ago

@wlguan Hi, how much does the inference time decrease after your pruning? Any statistics?
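For anyone wanting to collect such statistics themselves, a minimal CPU-latency benchmark in PyTorch looks like the sketch below; `model` is a placeholder for either the unpruned or the pruned network, and the shapes/iteration counts are arbitrary choices, not numbers from this thread:

```python
import time
import torch

@torch.no_grad()
def cpu_latency_ms(model, input_shape=(1, 3, 224, 224), warmup=3, iters=10):
    """Average forward-pass time in milliseconds on CPU."""
    model.eval()
    x = torch.randn(input_shape)
    for _ in range(warmup):          # warm up caches / lazy allocations
        model(x)
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    return (time.perf_counter() - start) / iters * 1e3

if __name__ == "__main__":
    # Compare the same measurement on the unpruned and pruned models
    # to get the actual speedup, e.g.:
    #   before = cpu_latency_ms(unpruned_model)
    #   after = cpu_latency_ms(pruned_model)
    pass
```

On GPU the same idea needs `torch.cuda.synchronize()` around the timed region, since CUDA kernels launch asynchronously.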