Hi @yihui-he, thank you for your code! I have two questions:
1. I ran your code on the CIFAR-10 dataset with its VGG model, using `python3 train.py -action c3 -caffe 0`. When the `R3` function runs (which includes spatial decomposition, channel decomposition, and channel pruning), CPU memory usage is very high and the whole process is very slow. For example, at conv3, `channel_decomposition` took 2776.09 and `spatial_decomposition` took 2395.62 (seconds, I assume); conv4 and conv5 behave the same as conv3, all very time-consuming, and sometimes a `MemoryError` occurs. Is this normal? How much CPU memory do I need? For reference, my machine has 64 GB of RAM.
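To pin down where the memory spike happens, a small sketch like the following could be dropped around the decomposition calls. It is only an illustration using the standard-library `resource` module (Unix-only); the step it wraps is a stand-in, not your actual code:

```python
import resource

def peak_memory_mb():
    """Return this process's peak resident set size in MB.

    Note: ru_maxrss is reported in kilobytes on Linux but in bytes
    on macOS, so the divisor below assumes Linux.
    """
    kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kb / 1024.0

before = peak_memory_mb()
# ... run e.g. channel_decomposition / spatial_decomposition here ...
after = peak_memory_mb()
print(f"peak RSS: {after:.1f} MB (was {before:.1f} MB before the step)")
```

Logging this per layer (conv3, conv4, conv5) would show whether the peak grows with layer width, which would explain the `MemoryError` on the larger layers.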
2. When using a pretrained model with your code for compression, should the dataset be the training data or the test data? Your code generates `mem_bn_vgg.prototxt`, in which `batch_size` equals the test-phase `batch_size`, so I'd like to know whether the feature maps extracted into `feats_dict` come from the training data or the test data.
Looking forward to your reply, thank you!