mit-han-lab / haq

[CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
https://hanlab.mit.edu/projects/haq/
MIT License

fused mobilenet v1 #8

Open HKLee2040 opened 4 years ago

HKLee2040 commented 4 years ago

If we apply HAQ to a fused MobileNet V1, i.e., a model in which each convolutional layer and its batch-norm layer have been fused together, it seems very difficult to quantize the model to 1–8 bits. Do you have any comments on this case?
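For readers unfamiliar with the fusion being discussed: folding a batch-norm layer into the preceding conv typically means absorbing the BN scale and shift into the conv weights and bias, so the fused conv's per-channel weight ranges can grow much larger, which makes low-bit quantization harder. Below is a minimal PyTorch sketch of that folding; the helper name `fold_bn_into_conv` is illustrative and not part of the HAQ codebase.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a new Conv2d whose weights/bias absorb the BatchNorm statistics."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    w = conv.weight.detach().clone()
    b = (conv.bias.detach().clone() if conv.bias is not None
         else torch.zeros(conv.out_channels))
    # Per-output-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight.detach() / torch.sqrt(bn.running_var + bn.eps)
    with torch.no_grad():
        fused.weight.copy_(w * scale.reshape(-1, 1, 1, 1))
        fused.bias.copy_((b - bn.running_mean) * scale + bn.bias.detach())
    return fused
```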

densechen commented 3 years ago

This confuses me as well. Did you try folding the batch-norm layers? What was the resulting performance?