AojunZhou / Incremental-Network-Quantization

Caffe Implementation for Incremental network quantization

How to save the model with low-precision? #41

Open fantexibaba opened 5 years ago

fantexibaba commented 5 years ago

I have validated the quantization function of INQ, and it works very well. However, the final stored model still contains 32-bit floating-point numbers. I want to know how to obtain the low-precision model during the training process and inspect it. Thanks a lot.
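One possible answer (not from the repo itself, just a sketch): Caffe's `.caffemodel` format stores every blob as float32, so even perfectly quantized INQ weights are serialized at full precision. Since INQ constrains each surviving weight to zero or a signed power of two, a post-processing script could re-encode each weight as a small integer code (sign plus exponent) and save that instead. The function names, the exponent range `[-7, -1]`, and the code layout below are all hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical encoding: code 0 -> weight 0; otherwise
# code = sign * (exponent - min_exp + 1), which fits in int8
# for any reasonable INQ exponent range.

def encode_pow2(weights, min_exp=-7, max_exp=-1):
    """Encode zero / signed power-of-two float32 weights as int8 codes."""
    codes = np.zeros(weights.shape, dtype=np.int8)
    nz = weights != 0
    exps = np.round(np.log2(np.abs(weights[nz]))).astype(np.int8)
    # Sanity check: weights must already be INQ-quantized to this range.
    assert np.all((exps >= min_exp) & (exps <= max_exp)), "not INQ-quantized"
    codes[nz] = np.sign(weights[nz]).astype(np.int8) * (exps - min_exp + 1)
    return codes

def decode_pow2(codes, min_exp=-7):
    """Recover the original float32 weights from the int8 codes."""
    weights = np.zeros(codes.shape, dtype=np.float32)
    nz = codes != 0
    exps = np.abs(codes[nz]).astype(np.int32) + min_exp - 1
    weights[nz] = np.sign(codes[nz]) * np.exp2(exps).astype(np.float32)
    return weights

# Round trip on a few power-of-two weights: 2^-1, -2^-2, 2^-7, and zero.
w = np.array([0.5, -0.25, 0.0078125, 0.0], dtype=np.float32)
codes = encode_pow2(w)
assert codes.dtype == np.int8
assert np.array_equal(decode_pow2(codes), w)
```

With 4-byte floats replaced by 1-byte codes this already shrinks storage 4x, and the codes could be bit-packed further (5 quantization levels per sign need only a few bits). For inference you would still decode back to float32, since stock Caffe layers compute in floating point.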

ghost commented 4 years ago

Hi @fantexibaba @AojunZhou, have you solved this problem?