talenz opened this issue 3 years ago
Hi,
training MobileNetV2 requires implementing (signed) asymmetric quantization for the activations. The last layer of the inverted residual bottleneck block has no ReLU, so its activations are signed.
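To illustrate the point above, here is a minimal sketch of signed asymmetric (affine) quantization with simple min/max calibration. This is generic uniform quantization for illustration only, not the APoT repository's own implementation; the function name and defaults are my own:

```python
def asymmetric_quantize(x, num_bits=8):
    """Affine-quantize a list of floats to a signed integer range.

    Because the last layer of the inverted residual block has no ReLU,
    its activations can be negative, so we map onto the signed range
    [-2^(b-1), 2^(b-1) - 1] with a learned/observed (min, max).
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    x_min, x_max = min(x), max(x)
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard degenerate range
    zero_point = int(round(qmin - x_min / scale))
    q = [max(qmin, min(qmax, int(round(v / scale)) + zero_point)) for v in x]
    dq = [(v - zero_point) * scale for v in q]  # dequantized approximation
    return q, dq, scale

# Signed activations, e.g. from the block's final (ReLU-free) 1x1 conv:
q, dq, s = asymmetric_quantize([-1.0, 0.0, 2.0])
```

The asymmetric zero point lets the quantizer spend the full integer range on the actual `[min, max]` interval instead of assuming it is symmetric around zero.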
Thanks.
Thanks for your reply! Is it possible to use per-channel weight quantization with your APoT scheme to boost performance, and if so, how?
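For reference, the general idea behind per-channel weight quantization is one scale per output channel instead of one per tensor. The sketch below uses plain symmetric uniform quantization to show the mechanics; it is an assumption on my part, not how (or whether) APoT supports it, and APoT's actual levels are powers of two rather than uniform:

```python
def per_channel_quantize(weight, num_bits=4):
    """Symmetrically quantize each output channel with its own scale.

    `weight` is a list of output channels, each a flat list of floats.
    A per-channel scale adapts to each channel's own range, which often
    helps layers with very uneven channel statistics, such as the
    depthwise convolutions in MobileNetV2.
    """
    qmax = 2 ** (num_bits - 1) - 1
    q_weight, scales = [], []
    for ch in weight:
        scale = max(abs(v) for v in ch) / qmax or 1.0  # guard all-zero channel
        scales.append(scale)
        q_weight.append(
            [max(-qmax, min(qmax, int(round(v / scale)))) for v in ch]
        )
    return q_weight, scales

# Two output channels with very different ranges get different scales:
qw, sc = per_channel_quantize([[0.5, -1.0], [2.0, 0.25]])
```

With a single per-tensor scale, the small-range channel above would waste most of its integer levels on the large-range channel's span.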
Dear talenz, what quantization results did you get on MobileNetV2?
@talenz
I use the official MobileNetV2 from torchvision.models.
Are there any special tricks to train mobilenet_v2?