DingXiaoH / RepVGG

RepVGG: Making VGG-style ConvNets Great Again
MIT License

question about insert BN before QAT #98

Open zhaoxin111 opened 2 years ago

zhaoxin111 commented 2 years ago

The quantization notes say: "We insert BN after the converted 3x3 conv layers because QAT with torch.quantization requires BN."

I wonder why QAT must have a BN after the conv. If we don't have BN, can't we just call fuse_modules with the [conv, relu] pattern instead? Right?
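To make the question concrete, here is a minimal sketch of the fusion being proposed: a conv followed directly by a ReLU (no BN), fused via torch.quantization.fuse_modules. The Block module and layer names ("conv", "relu") are hypothetical, chosen only for illustration; fuse_modules itself does accept a [conv, relu] pair and replaces it with a fused ConvReLU2d module.

```python
import torch
import torch.nn as nn
import torch.quantization


class Block(nn.Module):
    """Hypothetical conv + relu block with no BN in between."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))


m = Block().eval()
# Fuse the conv + relu pair; the "relu" slot becomes an Identity
# and "conv" becomes a fused ConvReLU2d module.
fused = torch.quantization.fuse_modules(m, [["conv", "relu"]])

x = torch.randn(1, 3, 16, 16)
# Fusion should not change the numerical output of the block.
print(torch.allclose(m(x), fused(x)))
```

Whether QAT then works on such a fused module without a BN (as opposed to the conv-BN-relu pattern the README describes) is exactly what this issue is asking.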