micronet: a model compression and deployment library.

Compression:
1. Quantization:
   - quantization-aware training (QAT): high-bit (>2-bit) — DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"; low-bit (≤2-bit) / ternary and binary — TWN, BNN, XNOR-Net
   - post-training quantization (PTQ): 8-bit (TensorRT)
2. Pruning: normal, regular, and group-convolution channel pruning
3. Group convolution structure
4. Batch-normalization fuse for quantization

Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape
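The batch-normalization fuse listed above can be sketched as follows. This is a generic numpy illustration of BN folding into a preceding convolution, not the library's own implementation; `fuse_bn` and its argument names are hypothetical.

```python
import numpy as np

def fuse_bn(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, running mean/var) into the preceding conv.

    conv_w: (out_c, in_c, kh, kw), conv_b: (out_c,)
    Returns fused (w, b) such that bn(conv(x)) == conv_fused(x).
    """
    # Per-output-channel scale applied by BN after the conv
    scale = gamma / np.sqrt(var + eps)
    # Scale the conv weights channel-wise, absorb mean/beta into the bias
    w = conv_w * scale.reshape(-1, 1, 1, 1)
    b = (conv_b - mean) * scale + beta
    return w, b
```

After fusing, the BN layer is removed, which simplifies the graph that the quantizer sees and avoids quantizing the intermediate pre-BN activations.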
I work on deploying models to embedded devices and am interested in model compression. I would like to understand some of the points to watch out for during compression, such as mixed channels and group convolution. Why is weight_decay set to 0 when training group convolutions? Also, which algorithms are implemented in models/util_w_t_b_conv.py? And what does util_w_t_gap.py do?
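For context on the ternary algorithms named in the feature list above (the file name util_w_t_b_conv.py hints at weight ternary/binary convolution), here is a minimal numpy sketch of TWN-style weight ternarization — an illustration of the general technique, not necessarily the repo's actual code; `ternarize_twn` is a hypothetical name.

```python
import numpy as np

def ternarize_twn(w):
    """TWN-style ternarization of a weight tensor.

    Threshold: delta = 0.7 * mean(|w|) (the heuristic from the TWN paper).
    Scale: alpha = mean of |w| over the entries above the threshold.
    Weights map to {-alpha, 0, +alpha}.
    """
    delta = 0.7 * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

w = np.array([0.9, -0.8, 0.05, -0.1, 0.6])
print(ternarize_twn(w))  # small weights snap to 0, the rest to +/-alpha
```

In training, the ternarized weights are used in the forward pass while full-precision weights receive the gradient updates (straight-through estimation).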