The steps I followed (all default parameters):
Used mode 0 to pre-train and pre-prune the networks. This produced pre_trained_0_0.pth and pre_trained_0_0_20.pth ~ pre_trained_0_0_80.pth (LeNet on MNIST), plus the pre_trained_1_1 / 2_1 series (MobileNet and VGG16 on CIFAR-10).
Ran "EMOMC --mode --net --data --flow --code" independently for each configuration (popsize=20, generation=250) and obtained the EA results.
Converted the units for plotting and checked the data.
For the VGG model, I used the upper method in the code (the default).
I'm a beginner and couldn't find my PC's MAC / write / read energy-efficiency figures, so I left them unchanged in the energy computation.
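For reference, this is my rough understanding of the kind of energy model involved (a minimal sketch; the per-operation costs below are placeholder assumptions I made up for illustration, not measured values for my PC and not the repo's actual constants):

```python
# Placeholder per-operation energy costs in picojoules (illustrative only).
E_MAC = 4.6      # assumed cost of one multiply-accumulate
E_READ = 640.0   # assumed cost of one memory read
E_WRITE = 640.0  # assumed cost of one memory write

def layer_energy_pj(n_mac, n_read, n_write):
    """Total energy (pJ) of one layer's forward pass as a weighted op count."""
    return n_mac * E_MAC + n_read * E_READ + n_write * E_WRITE
```

If this is roughly how the code weights operations, then not replacing the default MAC/read/write costs with values matching my hardware would shift the absolute energy numbers but should not change the Pareto-front trend.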
The results I got:
Please ignore the quoted numbers; I got them by setting target_prune=100 and skipping quantize_tensor in the forward pass. I don't think that's right, but for now, sadly, I can't do more.
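For context, this is roughly what I understand applying quantization in the forward pass to do (a minimal NumPy sketch of uniform affine "fake" quantization — my own illustration, not the repo's actual quantize_tensor implementation), which is why I suspect skipping it makes those numbers invalid:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    # Round the tensor onto a 2**num_bits integer grid, then dequantize,
    # so the forward pass sees quantization error while staying in float.
    qmin, qmax = 0.0, 2.0 ** num_bits - 1.0
    min_val, max_val = float(x.min()), float(x.max())
    scale = (max_val - min_val) / (qmax - qmin) or 1.0  # avoid div-by-zero
    zero_point = round(qmin - min_val / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale
```

Without this step in the forward pass, accuracy is measured on full-precision weights, so any accuracy/bit-width trade-off reported by the EA would not reflect the actual quantized model.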
As shown in the figure, my compression performance is much worse than the performance reported in your paper.
The trend looks right, but the values are not. In particular, for VGG16 on CIFAR-10 the energy consumption seems too high given the low accuracy; my lowest (normalized) energy consumption is around 0.8 or higher.
The model-size compression results are just as bad as the energy results on VGG16/CIFAR-10.
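For reference, here is the back-of-envelope size check I used to sanity-check the model-size numbers (my own sketch under assumptions: VGG16 on CIFAR-10 has roughly 15M parameters, and index/codebook overhead is ignored, so this is a lower bound):

```python
def compressed_size_mb(n_params, prune_ratio, bits):
    """Estimated compressed model size in MB: surviving weights * bit-width."""
    surviving = n_params * (1.0 - prune_ratio)
    return surviving * bits / 8 / 1e6

# e.g., ~15M params, 80% pruned, 8-bit weights, vs. the fp32 baseline:
small = compressed_size_mb(15e6, 0.8, 8)   # 3.0 MB
base = compressed_size_mb(15e6, 0.0, 32)   # 60.0 MB
```

By this estimate a heavily pruned, quantized VGG16 should be well under a tenth of the baseline size, which is why my measured compression ratios look wrong to me.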
Any help from the experts would be much appreciated!