666DZY666 / micronet

micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular, and group-convolution channel pruning; 3. group convolution structure; 4. batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.

Data range shrinking in activation quantization (AQ) #90

Open xingyueye opened 2 years ago

xingyueye commented 2 years ago

Hi, I noticed that in the DoReFa quantizer, at micronet/micronet/compression/quantization/wqaq/dorefa/quantize.py:43, the activation quantization first scales the data by 0.1 before quantizing: `output = torch.clamp(input * 0.1, 0, 1)  # scale feature A (* 0.1) before clamping, to reduce truncation error`. But after the quantize/dequantize step, the result is never multiplied by 10 to restore the original data range. How should this be understood?
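
For context, here is a minimal sketch of the DoReFa-style activation path the question refers to. The function names (`uniform_quantize`, `quantize_activation`) and the straight-through rounding are illustrative assumptions, not the repository's exact code; only the `torch.clamp(input * 0.1, 0, 1)` line is quoted from it:

```python
import torch

def uniform_quantize(x, k):
    # k-bit uniform quantize/dequantize on [0, 1].
    # (DoReFa rounds in the forward pass and uses a
    # straight-through gradient estimator in training.)
    n = float(2 ** k - 1)
    return torch.round(x * n) / n

def quantize_activation(x, a_bits=8):
    # Pre-scale by 0.1 before clamping, matching the line quoted above.
    x = torch.clamp(x * 0.1, 0, 1)
    x = uniform_quantize(x, a_bits)
    # Note: there is no compensating `x * 10` here, so the output stays at
    # one tenth of the original activation scale -- this asymmetry is
    # exactly what the issue is asking about.
    return x
```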