666DZY666 / micronet

micronet, a model compression and deployment library.

Compression:
1. Quantization: quantization-aware training (QAT): high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"); low-bit (≤2b), ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ): 8-bit (TensorRT).
2. Pruning: normal, regular, and group-convolution channel pruning.
3. Group convolution structure.
4. Batch-normalization fusion for quantization.

Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.

Concat quantization issue #89

Open xingyueye opened 2 years ago

xingyueye commented 2 years ago

Hi, I noticed that quantization of the concat layer is commented out in the code. Will it be supported later? And for now, does leaving concat unquantized affect quantization of the overall network?

666DZY666 commented 2 years ago

You can refer to the add quantization and implement concat quantization yourself in the same way. Quantizing concat should not have much impact on network performance.
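Not part of micronet's API, but as a reference, here is a minimal PyTorch sketch of what such a concat quantization could look like. The `QuantConcat` module and its per-tensor symmetric fake quantizer are hypothetical names invented for illustration; in practice you would reuse the same quantizer class that micronet's add quantization uses. The key idea, as with add, is that all inputs must be fake-quantized onto one shared scale before they are combined.

```python
import torch
import torch.nn as nn


class QuantConcat(nn.Module):
    """Hypothetical sketch: fake-quantize every input onto one shared scale,
    then concatenate, mirroring how an elementwise add would be quantized."""

    def __init__(self, num_bits=8, dim=1):
        super().__init__()
        self.num_bits = num_bits
        self.dim = dim

    def _fake_quantize(self, x, scale):
        # Symmetric uniform fake quantization with a straight-through
        # estimator so gradients flow through the rounding step during QAT.
        qmax = 2 ** (self.num_bits - 1) - 1
        q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
        return x + (q - x).detach()

    def forward(self, *inputs):
        qmax = 2 ** (self.num_bits - 1) - 1
        # Shared scale taken from the largest magnitude across all inputs,
        # so the concatenated tensor lives on a single quantization grid.
        max_abs = torch.stack([x.detach().abs().max() for x in inputs]).max()
        scale = (max_abs / qmax).clamp(min=1e-8)
        return torch.cat([self._fake_quantize(x, scale) for x in inputs],
                         dim=self.dim)


# Usage example (hypothetical): concatenate two feature maps along channels.
cat8 = QuantConcat(num_bits=8, dim=1)
y = cat8(torch.randn(1, 16, 8, 8), torch.randn(1, 32, 8, 8))  # -> (1, 48, 8, 8)
```

Because every input is mapped to the same scale, the concat itself introduces no extra quantization error beyond what the shared scale already causes, which is why its impact on accuracy is usually small.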