yhhhli / APoT_Quantization

PyTorch implementation of APoT quantization (ICLR 2020)

The accuracy of MobileNetV2 trained at a4w4 precision is nearly 0 #17

Open · talenz opened this issue 3 years ago

talenz commented 3 years ago

I used the official MobileNetV2 from torchvision.models.

Are there any special tricks to train mobilenet_v2?

yhhhli commented 3 years ago

Hi,

Training MobileNetV2 requires implementing signed (asymmetric) quantization for activations: the last layer of the inverted residual bottleneck block has no ReLU, so its activations are signed, and an unsigned quantizer would clip all negative values to zero.
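To illustrate the difference (a minimal sketch, not the repo's code: `uniform_quantize` is a hypothetical helper, and a real activation quantizer such as APoT's would also learn a clipping threshold):

```python
import torch

def uniform_quantize(x, bits=4, signed=True):
    # Hypothetical fake-quantization helper: maps x onto a uniform grid
    # of `bits`-bit levels after clamping to a fixed range.
    if signed:
        # Signed grid over [-1, 1]: needed when the input can be negative,
        # e.g. the output of the last layer of an inverted residual block,
        # which has no ReLU in front of it.
        n_levels = 2 ** (bits - 1) - 1
        x = torch.clamp(x, -1.0, 1.0)
    else:
        # Unsigned grid over [0, 1]: fine after a ReLU, but it collapses
        # all negative inputs to zero, which is what destroys accuracy here.
        n_levels = 2 ** bits - 1
        x = torch.clamp(x, 0.0, 1.0)
    return torch.round(x * n_levels) / n_levels

x = torch.randn(6) * 0.5                  # signed activations (no ReLU)
print(uniform_quantize(x, signed=True))   # negatives preserved
print(uniform_quantize(x, signed=False))  # negatives collapsed to 0
```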

Thanks.

talenz commented 3 years ago

Thanks for your reply! Is it possible (and if so, how) to use per-channel weight quantization in your APoT to boost performance?
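For context, a minimal sketch of what per-channel (symmetric, uniform) weight fake-quantization could look like; `quantize_weight_per_channel` is a hypothetical helper, not part of this repo, and APoT would replace the uniform rounding below with its additive-powers-of-two levels:

```python
import torch

def quantize_weight_per_channel(w, bits=4):
    # Hypothetical sketch: per-output-channel symmetric uniform
    # fake-quantization of a conv weight of shape (out_ch, in_ch, kh, kw).
    n_levels = 2 ** (bits - 1) - 1
    # One scale per output channel instead of one for the whole tensor.
    scale = w.abs().flatten(1).max(dim=1).values.clamp(min=1e-8)
    scale = scale.view(-1, 1, 1, 1)
    return torch.round(w / scale * n_levels) / n_levels * scale

w = torch.nn.Conv2d(16, 32, 3).weight.data
wq = quantize_weight_per_channel(w, bits=4)
print((w - wq).abs().max())  # worst-case per-channel quantization error
```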

Yoggiefu commented 1 year ago

Dear talenz, what quantization results did you get on MobileNetV2?

Yoggiefu commented 1 year ago

@talenz