AojunZhou / Incremental-Network-Quantization

Caffe Implementation for Incremental network quantization

Bias term in the Convolution param #7

Open TwistedfateKing opened 7 years ago

TwistedfateKing commented 7 years ago

You only quantized the weight parameters and skipped the bias parameters.

Have you ever done experiments on converting the bias parameters from floating point to fixed point?

Also, what do you think the result of INQ (weights only) combined with 8-bit quantized biases would be?

AojunZhou commented 7 years ago

@TwistedfateKing Thanks. I haven't done any experiments on bias quantization. I think you can either convert the biases from floating point to 8-bit, or remove the biases entirely.
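For anyone trying the 8-bit route suggested above, here is a minimal sketch of per-layer 8-bit fixed-point bias quantization. This is a hypothetical helper, not part of the INQ codebase: it picks a power-of-two scale so the largest bias magnitude fits in a signed 8-bit range, rounds, and returns both the int8 values and the dequantized floats so you can inspect the error.

```python
import numpy as np

def quantize_bias_8bit(bias):
    """Quantize a float bias vector to signed 8-bit fixed point.

    Hypothetical example (not the INQ implementation): chooses the
    number of fractional bits so that max(|bias|) fits in [-128, 127],
    then rounds to the nearest representable value.
    """
    max_abs = float(np.max(np.abs(bias)))
    # Fractional bits: 7 integer-plus-sign bits available in int8.
    frac_bits = 7 - int(np.ceil(np.log2(max_abs))) if max_abs > 0 else 7
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(bias * scale), -128, 127).astype(np.int8)
    dequant = q.astype(np.float32) / scale
    return q, dequant

bias = np.array([0.031, -0.27, 0.8, -1.5], dtype=np.float32)
q, deq = quantize_bias_8bit(bias)
# Rounding error is bounded by half a quantization step (0.5 / scale).
```

Since biases are few and are added after the heavy multiply-accumulate work, 8 bits is usually generous; this is why dropping or lightly quantizing them tends to cost little accuracy compared with weight quantization.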