IntelLabs / distiller

Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Apache License 2.0

wrpn quantizer #530

Open wm901115nwpu opened 4 years ago

wm901115nwpu commented 4 years ago

I am using alexnet_bn_wrpn on the ImageNet dataset, but the accuracy at epoch 36 is lower than with DoReFa, PACT, or FP32. This is my config file:

```yaml
quantizers:
  wrpn_quantizer:
    class: WRPNQuantizer
    bits_activations: 8
    bits_weights: 4
    overrides:
      # Don't quantize first and last layer
      features.0:
        bits_weights: null
        bits_activations: null
      features.1:
        bits_weights: null
        bits_activations: null
      classifier.5:
        bits_weights: null
        bits_activations: null
      classifier.6:
        bits_weights: null
        bits_activations: null

lr_schedulers:
  training_lr:
    class: MultiStepLR
    milestones: [60, 75]
    gamma: 0.2

policies:
```
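For reference, the schedule above corresponds roughly to constructing the quantizer directly in code. The snippet below is only a sketch following the general Distiller quantizer pattern, with argument names mirrored from the YAML keys; the exact constructor and `prepare_model()` signatures may differ between Distiller versions, and `model` / `optimizer` stand for the existing alexnet_bn model and its optimizer:

```python
from collections import OrderedDict
from distiller.quantization import WRPNQuantizer

# Keep the first and last layers at full precision, as in the YAML overrides above.
overrides = OrderedDict([
    ('features.0',   {'bits_weights': None, 'bits_activations': None}),
    ('features.1',   {'bits_weights': None, 'bits_activations': None}),
    ('classifier.5', {'bits_weights': None, 'bits_activations': None}),
    ('classifier.6', {'bits_weights': None, 'bits_activations': None}),
])

# Sketch only: `model` and `optimizer` are assumed to exist already,
# and argument names should be checked against your Distiller version.
quantizer = WRPNQuantizer(model, optimizer,
                          bits_activations=8, bits_weights=4,
                          overrides=overrides)
quantizer.prepare_model()
```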

wm901115nwpu commented 4 years ago

Also, I can't find any parameter for layer widening. Could you tell me how to configure this?
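For context on what "widening" refers to: the WRPN paper compensates for reduced precision by multiplying the number of channels (filters) in each layer by a width factor. The following is a minimal PyTorch sketch of that idea applied to an AlexNet-BN-style feature extractor; it is a hypothetical illustration (the `make_wide_features` helper and `width_mult` parameter are not part of Distiller), not the library's own mechanism:

```python
import torch.nn as nn

def make_wide_features(width_mult=2.0):
    """AlexNet-BN-style feature stem with channel counts scaled by `width_mult`,
    illustrating WRPN-style filter widening. Hypothetical sketch, not Distiller code."""
    def c(n):
        # Scale a channel count by the width multiplier.
        return int(n * width_mult)
    return nn.Sequential(
        nn.Conv2d(3, c(64), kernel_size=11, stride=4, padding=2),
        nn.BatchNorm2d(c(64)),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(c(64), c(192), kernel_size=5, padding=2),
        nn.BatchNorm2d(c(192)),
        nn.ReLU(inplace=True),
    )
```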