666DZY666 / micronet

micronet, a model compression and deployment library.
Compression:
1. Quantization: quantization-aware training (QAT): high-bit (>2b) (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ): 8-bit (TensorRT).
2. Pruning: normal, regular, and group-convolution channel pruning.
3. Group-convolution structure.
4. Batch-normalization fusion for quantization.
Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.

Hello, could you share the code in gc_prune.py for saving the model parameters after group-convolution pruning? #18

huangzicheng opened this issue 4 years ago (status: Open)

huangzicheng commented 4 years ago

With groups = s, grouping reduces each filter's input channels to channel/s, but the pruning code still indexes over the full channel count. For example, with (input=256, output=256, groups=2), each filter actually sees only 128 input channels, so the code goes out of range.
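For context, here is a minimal sketch (hypothetical, not the repo's code; the index set `keep` is made up) that reproduces the shape mismatch described above:

```python
import torch
import torch.nn as nn

# With groups=s, PyTorch stores the conv weight as
# (out_channels, in_channels // groups, kH, kW):
# each filter only sees in_channels // groups input channels.
conv = nn.Conv2d(256, 256, kernel_size=3, groups=2, bias=False)
print(conv.weight.shape)  # torch.Size([256, 128, 3, 3])

# A channel index set built over the full 256 input channels
# (which would be valid for groups=1) overruns dim 1 here:
keep = torch.arange(256)  # hypothetical "keep all channels" index set
try:
    conv.weight[:, keep, :, :]
except IndexError as err:
    print(err)  # an index >= 128 is out of bounds for dim 1 (size 128)
```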

205418367 commented 4 years ago

I also need the code for extracting the parameters of the quantized model.

AIprogrammer commented 4 years ago

> With groups = s, grouping reduces each filter's input channels to channel/s, but the pruning code still indexes over the full channel count. For example, with (input=256, output=256, groups=2), each filter actually sees only 128 input channels, so the code goes out of range.

Hi, is there a good solution for handling group convolution here?
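One possible workaround, sketched below under stated assumptions (`prune_group_conv`, `keep_out`, and `keep_in` are hypothetical names, not gc_prune.py's API), is to map the surviving global input-channel indices to within-group offsets before slicing the weight. It assumes the same number of channels survives in every group and that every group keeps the same local offsets:

```python
import torch
import torch.nn as nn

def prune_group_conv(conv: nn.Conv2d,
                     keep_out: torch.Tensor,
                     keep_in: torch.Tensor) -> nn.Conv2d:
    """Return a pruned copy of a grouped Conv2d.

    keep_out / keep_in: indices of surviving output / input channels
    of the full layer. Assumes len(keep_out) and len(keep_in) are
    divisible by groups and that every group keeps the same
    within-group offsets.
    """
    g = conv.groups
    in_per_group = conv.in_channels // g
    # Weight shape is (out, in // g, kH, kW), so dim 1 must be indexed
    # with within-group offsets, not global channel indices.
    local_in = torch.unique(keep_in % in_per_group)
    new_conv = nn.Conv2d(
        len(keep_in), len(keep_out),
        kernel_size=conv.kernel_size, stride=conv.stride,
        padding=conv.padding, dilation=conv.dilation,
        groups=g, bias=conv.bias is not None,
    )
    new_conv.weight.data = conv.weight.data[keep_out][:, local_in].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep_out].clone()
    return new_conv

# Example: (input=256, output=256, groups=2), keeping 64 channels per group.
conv = nn.Conv2d(256, 256, 3, groups=2, bias=False)
keep = torch.cat([torch.arange(64), torch.arange(128, 192)])
pruned = prune_group_conv(conv, keep_out=keep, keep_in=keep)
print(pruned.weight.shape)  # torch.Size([128, 64, 3, 3])
```

The key design constraint is that a grouped conv only stays valid if channels are pruned symmetrically across groups, which is why the sketch keeps the same number of channels (and the same local offsets) in each group.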