DingXiaoH / RepVGG

RepVGG: Making VGG-style ConvNets Great Again
MIT License

Question about inserting BN before QAT #98

Open zhaoxin111 opened 1 year ago

zhaoxin111 commented 1 year ago

The quantization notes say: "We insert BN after the converted 3x3 conv layers because QAT with torch.quantization requires BN".

I wonder why QAT must have BN after the conv. If we don't have BN, can't we just call fuse_modules with the conv + relu pattern instead?
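To illustrate the alternative I have in mind, here is a minimal sketch (the module and its names like `SmallNet` are hypothetical, not from the RepVGG code) of fusing a plain Conv + ReLU pair with torch's `fuse_modules`, with no BN in between:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import fuse_modules

class SmallNet(nn.Module):
    """Hypothetical toy model: a bare conv followed by ReLU, no BN."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

m = SmallNet().eval()
# ['conv', 'relu'] is a supported fusion pattern; BN is not required here.
fused = fuse_modules(m, [['conv', 'relu']])
# The conv is replaced by a fused Conv+ReLU module and the standalone
# ReLU becomes an Identity.
print(type(fused.conv).__name__)
```

So at least for post-training fusion this pattern works without BN, which is why I'm asking whether BN is strictly necessary for the QAT path.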