allenbai01 / ProxQuant

ProxQuant: Quantized Neural Networks via Proximal Operators

How to run multi-bit quantization? #2

Open haolibai opened 5 years ago

haolibai commented 5 years ago

Thanks for sharing the code, and nice paper! How can I run multi-bit quantization? Do you have a script or code for it? And have you tried multi-bit quantization on image classification?

Thanks!

allenbai01 commented 5 years ago

Hi Haoli,

Thank you for your interest in our work, and sorry for the delayed response. I am attaching the code for multi-bit LSTM quantization. Please extract it into LanguageModel/ and download the dataset accordingly. Then run train-sgd-w-ptb.sh to start training quantized LSTMs on Penn Treebank, or look at the other training scripts for more options. (The archive also includes the pretrained full-precision models, so there is no need to train those yourself.)

Re: multi-bit quantization on image classification. I haven't tried more than 2 bits on image problems, as ternarization already gives pretty good results on CIFAR-10. For ImageNet, though, it would certainly make sense to look at more bits.
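For readers of this thread, here is a rough sketch of what one multi-bit prox step could look like. This is an illustration only, not the code in the attached archive: the greedy binary fitting, the squared-distance regularizer, and the function names (`greedy_multibit_quantize`, `multibit_prox_step`) are all my own assumptions.

```python
import torch

def greedy_multibit_quantize(w, k=2):
    """Greedily approximate w by sum_i alpha_i * b_i with each b_i in {-1, +1}.

    Each round fits one binary direction (and its least-squares scale)
    to the residual left by the previous rounds.
    """
    residual = w.clone()
    approx = torch.zeros_like(w)
    for _ in range(k):
        b = torch.sign(residual)
        b[b == 0] = 1.0                    # break sign ties toward +1
        alpha = residual.abs().mean()      # least-squares scale for this b
        approx = approx + alpha * b
        residual = w - approx
    return approx

def multibit_prox_step(w, lam, k=2):
    """One soft prox step pulling w toward its k-bit quantization q(w).

    Holding q(w) fixed, argmin_x 0.5*||x - w||^2 + 0.5*lam*||x - q||^2
    has the closed form (w + lam * q) / (1 + lam); as lam grows, the
    weights collapse onto exactly quantized values.
    """
    q = greedy_multibit_quantize(w, k)
    return (w + lam * q) / (1.0 + lam)
```

In ProxQuant-style training, a step like this would follow each gradient update, with lam annealed upward over training so the weights are exactly quantized by the end.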

Best, Yu

LanguageModel.7z: https://drive.google.com/file/d/1pAK4gJcZh3YACW9rcqw3oGLIGK4WbFgb/view?usp=drive_web


haolibai commented 5 years ago

Thank you very much for your help!