zhutmost / lsq-net

Unofficial implementation of LSQ-Net, a neural network quantization framework
MIT License
269 stars 40 forks

8 bit for first conv and fc #11

Closed liyue2ppy closed 2 years ago

liyue2ppy commented 3 years ago

Hi, did you try to use 8 bit for first conv and fc for quantized network training? Will the accuracy drop very much?

zhutmost commented 3 years ago

Yes. I don't remember exactly, so it is difficult to give a precise figure, but the accuracy loss was unacceptable.
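For context, LSQ quantizes each layer with a learned step size `s`; the forward pass is `v_hat = round(clip(v / s, Q_N, Q_P)) * s`, and the bit width (e.g. 8 for the first conv and final fc, fewer bits elsewhere) sets the clipping range. A minimal sketch in plain Python on scalars (the actual implementation works on tensors and uses a straight-through estimator for gradients):

```python
def lsq_quantize(v, s, num_bits, is_activation=True):
    """LSQ fake quantization, forward pass only:
    scale by 1/s, clamp to [Q_N, Q_P], round, rescale by s."""
    if is_activation:
        # unsigned levels for post-ReLU activations
        qn, qp = 0, 2 ** num_bits - 1
    else:
        # symmetric signed levels for weights
        qn, qp = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    clipped = max(qn, min(qp, v / s))
    return round(clipped) * s
```

With 8 bits the quantization grid is fine enough that the first conv and last fc usually lose little, which is why mixed-precision setups commonly keep those two layers at 8 bits while lowering the rest.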


liyue2ppy commented 3 years ago

I see that the initialization of the step parameter for activations in this repo differs from the paper. Why not use the paper's method? Is it invalid?

zhutmost commented 2 years ago

> I see that the initialization of the step parameter for activations in this repo differs from the paper. Why not use the paper's method? Is it invalid?

Sorry for replying so late. I forgot about it (T_T)

I am running more experiments with the same setup as the paper. You can visit https://github.com/zhutmost/neuralzip for more details.
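For reference, the step-size initialization given in the LSQ paper is `s = 2 * mean(|v|) / sqrt(Q_P)`, computed over the weight tensor (for weights) or a first batch of activations (for activations). A minimal sketch, assuming unsigned levels for activations and symmetric signed levels for weights:

```python
import math

def lsq_step_init(values, num_bits, is_activation=True):
    """Paper's initialization: s = 2 * mean(|v|) / sqrt(Q_P).
    `values` is the weight tensor or a sample batch of activations,
    flattened to a list of floats for this sketch."""
    if is_activation:
        qp = 2 ** num_bits - 1            # e.g. 255 for 8-bit activations
    else:
        qp = 2 ** (num_bits - 1) - 1      # e.g. 127 for 8-bit weights
    mean_abs = sum(abs(v) for v in values) / len(values)
    return 2.0 * mean_abs / math.sqrt(qp)
```

For activations this requires observing real data once before training, which is one practical reason an implementation might substitute a data-independent initialization instead.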