666DZY666 / micronet

micronet, a model compression and deploy lib.
Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference), low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular, and group convolutional channel pruning; 3. group convolution structure; 4. batch-normalization fusion for quantization.
Deploy: TensorRT, fp32/fp16/int8 (PTQ calibration), op-adapt (upsample), dynamic_shape.

About the fix for the NaN bug in iao quantization-aware training #68

Open Racha1992 opened 3 years ago

Racha1992 commented 3 years ago

Hi author, regarding the fix for the NaN bug in iao quantization-aware training: the notes say it was caused by a per-channel min-max error. I compared the code and found that, apart from adding a copy call, there is no essential difference. My guess is that the copy is meant to stop the gradient here so it no longer propagates back, similar to what detach() does. However, I printed the min/max in the previous version, and they did not require gradients either. So could you please explain in detail how the earlier NaN was actually caused? Thanks a lot! If convenient, you could also add me on WeChat: tju641397211. Thanks again!
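
For reference, here is a minimal, hypothetical sketch (illustration only, not the repo's actual observer code) of a per-channel min/max observer contrasting the two update styles in question: rebinding the attribute to the reduction result versus copying the statistics into a registered buffer, which also keeps them out of the autograd graph (similar in effect to detach()). Here x is assumed to be flattened per channel to shape (num_channels, N).

```python
import torch
import torch.nn as nn

class MinMaxObserver(nn.Module):
    """Hypothetical per-channel min/max observer (illustration only)."""
    def __init__(self, num_channels):
        super().__init__()
        # Registered buffers: saved in state_dict and moved with .to(device)
        self.register_buffer('min_val', torch.zeros(num_channels))
        self.register_buffer('max_val', torch.zeros(num_channels))

    def update_by_rebinding(self, x):
        # Style A: rebind the attribute to the reduction result.
        # The stored tensor is now the per-channel min/max itself; if x
        # requires grad it carries a grad_fn, i.e. it stays attached to
        # the autograd graph of the current batch.
        self.min_val = torch.min(x, dim=1)[0]
        self.max_val = torch.max(x, dim=1)[0]

    def update_by_copy(self, x):
        # Style B: copy the statistics into the existing buffers.
        # copy_() under no_grad keeps the original buffer objects and
        # leaves them detached, similar in effect to detach().
        with torch.no_grad():
            self.min_val.copy_(torch.min(x, dim=1)[0])
            self.max_val.copy_(torch.max(x, dim=1)[0])
```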

666DZY666 commented 3 years ago

Here: https://github.com/666DZY666/micronet/commit/45136477a61528f171351f70763d5334cb31aebe#diff-9673a485be40841c87237bb5f5dc0b0e718116160d9898089e9cefb662679524R24

Racha1992 commented 3 years ago

Here: 4513647#diff-9673a485be40841c87237bb5f5dc0b0e718116160d9898089e9cefb662679524R24

Aren't these two equivalent? What is essentially different between them?
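
(Not the maintainer's answer, just for anyone comparing the two styles: with the hypothetical observer sketched above, one observable difference is whether the stored min/max stay detached from the autograd graph, which can be checked directly. This does not claim to reproduce the original NaN.)

```python
# Small check with the hypothetical observer above (illustrative only).
obs_a = MinMaxObserver(num_channels=4)
obs_b = MinMaxObserver(num_channels=4)

# e.g. a weight flattened per output channel, participating in autograd
x = torch.randn(4, 16, requires_grad=True)

obs_a.update_by_rebinding(x)
obs_b.update_by_copy(x)

print(obs_a.min_val.grad_fn)  # not None: still attached to the graph
print(obs_b.min_val.grad_fn)  # None: detached, as with detach()
```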