666DZY666 / micronet

micronet, a model compression and deploy lib. Compression: 1. quantization: quantization-aware-training (QAT), High-Bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference), Low-Bit (≤2b) / Ternary and Binary (TWN/BNN/XNOR-Net); post-training-quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular and group convolutional channel pruning; 3. group convolution structure; 4. batch-normalization fuse for quantization. Deploy: TensorRT, fp32/fp16/int8 (PTQ calibration), op-adapt (upsample), dynamic_shape.

Question about the bn_fold quantization code #60

Closed. Racha1992 closed this issue 3 years ago.

Racha1992 commented 3 years ago

In the forward function of BNFold_Conv2d_Q, the bias is computed as bias = reshape_to_bias(self.beta + (self.bias - batch_mean) * (self.gamma / torch.sqrt(batch_var + self.eps))). Why does this use batch_mean and batch_var instead of self.running_mean and self.running_var?

Racha1992 commented 3 years ago
    bias = reshape_to_bias(self.beta - batch_mean * (self.gamma / torch.sqrt(batch_var + self.eps)))      # bias is folded with the batch statistics
    weight = self.weight * reshape_to_weight(self.gamma / torch.sqrt(self.running_var + self.eps))        # weight is folded with the running statistics

What I don't understand is: why is the bias folded with the batch statistics while the weight is folded with the running statistics?
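
For context, here is a minimal sketch (not the repository's code; the fold_bn name and signature are made up for illustration) of a standard BN fold in which both the weight and the bias are folded with the same statistics. The snippet quoted above differs from this by mixing batch and running statistics:

    import torch

    def fold_bn(weight, bias, gamma, beta, mean, var, eps=1e-5):
        # Fold a BatchNorm layer (gamma, beta, mean, var) into the preceding conv:
        #   y = gamma * (conv(x, w) + b - mean) / sqrt(var + eps) + beta
        # becomes a plain conv with a folded weight and bias.
        std = torch.sqrt(var + eps)
        w_fold = weight * (gamma / std).reshape(-1, 1, 1, 1)   # per-output-channel scale
        b_fold = beta + (bias - mean) * (gamma / std)          # per-output-channel shift
        return w_fold, b_fold

At inference time the fold uses the running statistics; the question is which statistics to use during quantization-aware training, where BN is folded but the batch statistics still change every step.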

666DZY666 commented 3 years ago

For reference, see the related material under 相关资料_压缩_量化_QAT_High-Bit: Quantizing deep convolutional networks for efficient inference: A whitepaper
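
My reading of that whitepaper's BN-folding scheme (an interpretation, not necessarily the repository's exact forward pass): folding the weight with the running variance keeps the quantized weight statistics stable across batches, while the bias is folded with the current batch statistics; the remaining scale mismatch between sqrt(running_var) and sqrt(batch_var) is then compensated on the convolution output, so training still behaves like ordinary batch normalization. A minimal sketch of that correction, with all names (folded_forward, weight_fold, bias_fold, etc.) chosen for illustration:

    import torch
    import torch.nn.functional as F

    def folded_forward(x, weight_fold, bias_fold, batch_var, running_var,
                       stride=1, padding=0, eps=1e-5):
        # weight_fold was built with running_var, bias_fold with batch statistics.
        out = F.conv2d(x, weight_fold, bias=None, stride=stride, padding=padding)
        # Rescale the conv output so the overall result matches BN computed with
        # the current batch variance: gamma/sqrt(batch_var) instead of
        # gamma/sqrt(running_var).
        correction = torch.sqrt(running_var + eps) / torch.sqrt(batch_var + eps)
        out = out * correction.reshape(1, -1, 1, 1)
        return out + bias_fold.reshape(1, -1, 1, 1)

With weight_fold = w * gamma / sqrt(running_var + eps) and bias_fold = beta + (b - batch_mean) * gamma / sqrt(batch_var + eps), the rescaled output works out to gamma * (conv(x, w) + b - batch_mean) / sqrt(batch_var + eps) + beta, i.e. exactly BN with batch statistics, while the weight that gets quantized is the one folded with the slowly varying running statistics.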