liuzhuang13 / slimming

Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017.

cifar10 flops higher than cifar100 on DenseNet(40% pruned) #8

Open Sirius083 opened 5 years ago

Sirius083 commented 5 years ago

Thanks for your great work. I have a small question about calculating FLOPs. In Table 1 of the paper, the CIFAR-10 DenseNet-40 (40% pruned) model has 3.81×10^8 FLOPs, while the CIFAR-100 DenseNet-40 (40% pruned) model has 3.71×10^8 FLOPs. Since CIFAR-100 has 100 classes and CIFAR-10 has only 10, why are the FLOPs on CIFAR-10 higher than on CIFAR-100 for the same architecture? Thanks in advance.

liuzhuang13 commented 4 years ago

Because these are two different models, and the algorithm prunes different parts of each network. Even if you prune a fixed fraction of channels (40% in this case), the resulting FLOPs depend on where the pruning happens. For example, if more channels are pruned in early layers, you will reduce more FLOPs, since those layers have larger activation maps.
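To make the intuition concrete, here is a minimal sketch (not the paper's exact FLOPs accounting) using a standard multiply-accumulate count for a k×k convolution, k·k·C_in·C_out·H·W. The layer shapes and channel counts below are hypothetical, chosen only to show that removing the same fraction of channels saves far more FLOPs in a layer with a large activation map than in one with a small map:

```python
# Toy illustration: why FLOPs depend on *where* channels are pruned.
# All layer shapes here are made up for the example.

def conv_flops(c_in, c_out, h, w, k=3):
    """Multiply-accumulate count of a k x k convolution producing an h x w output map."""
    return k * k * c_in * c_out * h * w

# Hypothetical early layer (32x32 activation map) vs. late layer (8x8 map),
# both with the same channel counts before pruning.
early = conv_flops(c_in=48, c_out=48, h=32, w=32)
late  = conv_flops(c_in=48, c_out=48, h=8,  w=8)

# Prune 40% of the output channels in each layer separately (48 * 0.6 ~= 29).
early_pruned = conv_flops(c_in=48, c_out=29, h=32, w=32)
late_pruned  = conv_flops(c_in=48, c_out=29, h=8,  w=8)

print(f"FLOPs saved by pruning the early layer: {early - early_pruned:,}")
print(f"FLOPs saved by pruning the late  layer: {late - late_pruned:,}")
# The early-layer saving is 16x larger here, purely because its activation map
# is 4x larger in each spatial dimension.
```

So two pruned networks with the same overall channel-pruning ratio can easily end up with different total FLOPs, which is why the CIFAR-10 and CIFAR-100 models in Table 1 differ even though both are pruned by 40%.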