zhehui-wang / EMOMC


Why is my compression performance much worse than in your paper? #1

Open zshzhang opened 2 years ago

zshzhang commented 2 years ago

The steps I followed, using all the default parameters:

  1. Use mode 0 to pre-train & pre-prune the networks. This produced pre_trained_0_0.pth and pre_trained_0_0_20.pth through pre_trained_0_0_80.pth (LeNet on MNIST), plus the corresponding pre_trained_1_1 / pre_trained_2_1 series (MobileNet and VGG16 on CIFAR-10); see the pruning sketch after this list.
  2. Run "EMOMC --mode --net --data --flow --code" independently (popsize=20, generation=250) and collect the EA results.
  3. Convert the units, plot, and check the data.
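
For context, a minimal sketch of what step 1's checkpoint series might correspond to, assuming global magnitude pruning at 20%~80% sparsity. The file names follow this thread; the pruning criterion and the `build_lenet` constructor are assumptions, not necessarily EMOMC's actual internals:

```python
# Sketch: one pre-trained checkpoint plus magnitude-pruned variants
# at 20%..80% sparsity (assumed criterion, not necessarily EMOMC's).
import torch
import torch.nn.utils.prune as prune

def prune_checkpoint(model, sparsity, out_path):
    """Globally magnitude-prune `sparsity` fraction of conv/linear weights."""
    params = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))
    ]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=sparsity
    )
    for m, name in params:
        prune.remove(m, name)  # bake the pruning mask into the weights
    torch.save(model.state_dict(), out_path)

# model = build_lenet()  # hypothetical constructor
# for pct in (20, 40, 60, 80):
#     model.load_state_dict(torch.load("pre_trained_0_0.pth"))
#     prune_checkpoint(model, pct / 100, f"pre_trained_0_0_{pct}.pth")
```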

For the VGG model I used the upper method in the code (the default). I'm a beginner and couldn't find my PC's MAC & W & R (write/read) energy-efficiency figures, so I left them unchanged in the energy computation; a sketch of how such constants enter the energy model is below.
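
As a hedged illustration, here is a minimal sketch of the kind of per-operation energy model that MAC/write/read figures plug into. The per-op energies below are placeholders, not measured values; real numbers are hardware-specific, which is exactly the part left at the defaults above:

```python
# Placeholder per-op energies in joules (NOT measured values).
E_MAC = 3.1e-12    # J per multiply-accumulate (placeholder)
E_READ = 5.0e-12   # J per weight read (placeholder)
E_WRITE = 5.0e-12  # J per weight write (placeholder)

def layer_energy(n_mac: int, n_read: int, n_write: int) -> float:
    """Total energy = MAC energy + memory-access energy."""
    return n_mac * E_MAC + n_read * E_READ + n_write * E_WRITE

# Example: a 3x3 conv, 64 -> 64 channels, on a 32x32 feature map
# n_mac = 32 * 32 * 64 * 64 * 3 * 3
# print(layer_energy(n_mac, n_read=64 * 64 * 3 * 3, n_write=0))
```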

The results I got: fig5. Please ignore the quoted numbers; I obtained them by setting target_prune=100 and not applying quantize_tensor in the forward pass. I don't think that's right, but for now, sadly, I can't do more. As the figure shows, my compression performance is much worse than the performance reported in your paper. The trend seems right, but the values are not. In particular, for VGG16 on CIFAR-10 the energy consumption seems far too high for the low accuracy; my lowest energy consumption is around 0.8+. The model-size compression results look like the energy results: both perform badly on VGG16/CIFAR-10.
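
For reference, fake quantization in a forward pass typically looks like the sketch below. The name quantize_tensor matches the function mentioned above, but this body is an assumption about its behaviour, not the repo's actual implementation. Skipping it would mean accuracy is measured at full precision while energy/size are estimated for the quantized model, which could distort the plotted points:

```python
# Sketch of fake quantization: quantize to num_bits, then dequantize,
# so the forward pass sees quantization error at full-precision dtype.
import torch

def quantize_tensor(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    scale = torch.clamp(scale, min=1e-8)  # avoid division by zero
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale       # dequantize back

# w = torch.randn(64, 64, 3, 3)
# w_q = quantize_tensor(w, num_bits=4)    # simulate 4-bit weights
```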

Experts, please help!

zshzhang commented 2 years ago

It seems that changing the default pre-prune bound yields a better pre-pruned model; still working on it (see the sketch below).
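
A minimal sketch of the kind of bound sweep described above, reusing prune_checkpoint from the earlier sketch; finetune and evaluate are hypothetical helpers, not functions from this repo:

```python
# Sweep several pre-prune bounds (target sparsities), fine-tune briefly,
# and keep the checkpoint with the best accuracy/sparsity trade-off.
import torch

def sweep_preprune_bounds(model_ctor, weights_path, bounds=(0.3, 0.5, 0.7, 0.9)):
    results = {}
    for bound in bounds:
        model = model_ctor()
        model.load_state_dict(torch.load(weights_path))
        # prune_checkpoint is defined in the pruning sketch further up
        prune_checkpoint(model, bound, f"pre_pruned_{int(bound * 100)}.pth")
        # finetune(model, epochs=5)         # hypothetical fine-tuning step
        # results[bound] = evaluate(model)  # hypothetical accuracy eval
    return results
```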