zhaoxin111 opened 2 years ago
In the quantization section it says, “We insert BN after the converted 3x3 conv layers because QAT with torch.quantization requires BN”.
I wonder why QAT necessarily needs a BN after the conv. If we don't have BN, couldn't we just call fuse_modules with the conv + relu pattern instead, as in the sketch below?
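A minimal sketch (not from the RepVGG repo, module names and layer shapes are illustrative only) of what the question proposes: a plain Conv2d + ReLU block fused with torch.quantization.fuse_modules using the ["conv", "relu"] pattern (no BN), then prepared for QAT with the eager-mode API.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class ConvReLU(nn.Module):
    """A conv + relu block with no BatchNorm (hypothetical example module)."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = ConvReLU()

# Fuse conv + relu; conv+relu is a supported fusion pattern even without BN.
model.eval()
tq.fuse_modules(model, [["conv", "relu"]], inplace=True)

# Prepare for QAT with the default QAT qconfig.
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

# ... run the QAT training loop here, then convert to a quantized model:
model.eval()
quantized = tq.convert(model)
```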