666DZY666 / micronet

micronet, a model compression and deployment library.
Compression:
1. Quantization: quantization-aware training (QAT) — high-bit (>2b) (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT).
2. Pruning: normal, regular, and group-convolution channel pruning.
3. Group-convolution structure.
4. Batch-normalization fusing for quantization.
Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
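As a rough illustration of the QAT path listed above, the following is a minimal NumPy sketch of DoReFa-style k-bit weight quantization, following the formula from the DoReFa-Net paper. The function names `quantize_k` and `dorefa_quantize_weights` are hypothetical and not part of micronet's API; this is a sketch of the idea, not the repo's implementation.

```python
import numpy as np

def quantize_k(x, k):
    # Uniformly quantize x in [0, 1] onto 2^k - 1 steps (DoReFa's quantize_k).
    n = 2 ** k - 1
    return np.round(x * n) / n

def dorefa_quantize_weights(w, k):
    # DoReFa-Net k-bit weight quantization:
    # squash with tanh, normalize into [0, 1], quantize, map back to [-1, 1].
    t = np.tanh(w)
    x = t / (2 * np.max(np.abs(t))) + 0.5
    return 2 * quantize_k(x, k) - 1

w = np.array([-1.2, -0.3, 0.0, 0.4, 0.9])
wq = dorefa_quantize_weights(w, k=2)  # values land on at most 2^k levels in [-1, 1]
```

During training, the rounding step is paired with a straight-through estimator so gradients flow through it unchanged; the sketch above only shows the forward pass.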
MIT License

About code comments and papers #9

Closed polarisZhao closed 4 years ago

polarisZhao commented 4 years ago

Hello, thank you very much for sharing this. Unfortunately, the code has very few comments and no related papers. Could you attach the papers the code is based on so everyone can study them? Thanks.

666DZY666 commented 4 years ago

The code in the WbWtAb folder is commented in relatively more detail; the rest of the code is only commented at a few key points. The related papers will be added shortly. Thanks.

zyc4me commented 4 years ago

Also requesting some reference papers to study from; the code alone is hard to follow. And to the author: truly impressive work 👍

JensenHJS commented 4 years ago

Hoping the author can soon share the papers behind the ideas in the code. Looking forward to it, thanks.

666DZY666 commented 4 years ago

Added to the README.