666DZY666 / micronet

micronet, a model compression and deploy lib.
compression: 1. quantization: quantization-aware-training (QAT), High-Bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"), Low-Bit (≤2b) / Ternary and Binary (TWN/BNN/XNOR-Net); post-training-quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular, and group convolutional channel pruning; 3. group convolution structure; 4. batch-normalization fusion for quantization.
deploy: TensorRT, fp32/fp16/int8 (PTQ calibration), op-adapt (upsample), dynamic_shape
MIT License

With BN fusion, is it effectively as if there is no BN during training? #74

Open ArtyZe opened 3 years ago

ArtyZe commented 3 years ago

Hello, reading your code, my understanding is that once BN fusion is enabled, the BN parameters are no longer updated during training, and the previous running mean and variance are used instead. Is that correct?

666DZY666 commented 3 years ago

In QAT, if pretrained floating-point model parameters are loaded, BN continues to update on top of its running statistics; in the recently added QAFT, BN is frozen.

BN fusion means that during quantization-aware training the BN parameters still exist, but no standalone BN operation is performed; instead (in QAT) the updated BN parameters are folded into the preceding conv. At quantized inference time, after BN fusion, BN is eliminated entirely.

See the relevant code for details.
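The folding the author describes can be sketched numerically: a conv with weight W and bias b followed by BN with scale gamma, shift beta, and running statistics mean/var is equivalent to a single conv with rescaled weight and bias. A minimal NumPy sketch of that identity (the function name and shapes here are illustrative, not the repo's actual `bn_fuse` code):

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm statistics into the preceding conv's weight and bias.

    W: conv weight, shape (out_channels, ...); b: conv bias, shape (out_channels,)
    gamma/beta: BN scale and shift; mean/var: BN running statistics.
    Returns (W_fused, b_fused) such that
        conv(x, W_fused, b_fused) == bn(conv(x, W, b)).
    """
    scale = gamma / np.sqrt(var + eps)  # per-output-channel scale factor
    # Broadcast the per-channel scale over the remaining weight dimensions.
    W_fused = W * scale.reshape(-1, *([1] * (W.ndim - 1)))
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused
```

This is why, after fusion, inference needs no BN op at all: the same math is carried entirely by the conv's weight and bias, which is what makes the fused conv directly quantizable.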