[Open] HKLee2040 opened this issue 4 years ago
If we apply HAQ to a fused MobileNet V1, i.e., with the convolution and batch-norm layers fused together, it seems very difficult to quantize such a model to 1–8 bits. Do you have any comments on this case?
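For context, conv/BN fusion ("folding") rewrites the batch-norm affine transform into the preceding layer's weights and bias. A minimal numpy sketch of the standard folding identity (not the HAQ repo's code; the linear layer stands in for a 1x1 conv) is:

```python
import numpy as np

def fold_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv/linear layer.

    BN(z) = gamma * (z - mean) / sqrt(var + eps) + beta, applied
    per output channel, is absorbed into new weights and bias.
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
    W_f = W * scale[:, None]             # scale each output channel's weights
    b_f = beta + (b - mean) * scale      # fold the mean/shift into the bias
    return W_f, b_f

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
gamma = rng.standard_normal(4)
beta = rng.standard_normal(4)
mean = rng.standard_normal(4)
var = rng.random(4) + 0.1

x = rng.standard_normal((3, 8))
# Reference: linear layer followed by batch norm (inference mode).
z = x @ W.T + b
y_ref = gamma * (z - mean) / np.sqrt(var + 1e-5) + beta
# Folded: a single linear layer, numerically equivalent.
W_f, b_f = fold_bn(W, b, gamma, beta, mean, var)
y_fold = x @ W_f.T + b_f
assert np.allclose(y_ref, y_fold)
```

Note that the per-channel scales `gamma / sqrt(var + eps)` can stretch the dynamic range of the folded weights very unevenly across channels, which is one common reason a folded model is harder to quantize to very low bit-widths with a single per-tensor scale; per-channel quantization usually mitigates this.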
This is really confusing me as well. Did you try folding the batch-norm layers? What performance did you get?