Closed liyue2ppy closed 2 years ago
Yes. I don't remember very well, so it is hard to give an exact figure, but the accuracy loss was unacceptable.
I see that the initialization of the step parameter for activations in this repo differs from the paper. Why not use the paper's method? Does it not work?
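For reference, the LSQ paper initializes the activation step size from the first batch of data as s = 2·E[|x|]/√Q_P. The sketch below is a minimal, dependency-free illustration of that formula (the function names and list-based inputs are my own, not this repo's code):

```python
import math

def init_step_paper(x, bits):
    # LSQ paper's activation step init: s = 2 * mean(|x|) / sqrt(Q_P),
    # where Q_P = 2**bits - 1 is the largest unsigned quantization level.
    q_p = 2 ** bits - 1
    abs_mean = sum(abs(v) for v in x) / len(x)
    return 2.0 * abs_mean / math.sqrt(q_p)

def fake_quantize(x, step, bits):
    # Fake-quantize an activation: scale, clamp to [0, Q_P], round, rescale.
    q_p = 2 ** bits - 1
    return [min(max(round(v / step), 0), q_p) * step for v in x]
```

In a real PyTorch implementation the step would be an `nn.Parameter` updated by gradient descent (with a straight-through estimator for the round), and this formula would only set its initial value on the first forward pass.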
Sorry for replying so late. I forgot about it (T_T)
I am running more experiments with the same setup as the paper. See https://github.com/zhutmost/neuralzip for more details.
Hi, did you try using 8 bits for the first conv and the fc layer when training the quantized network? Does the accuracy drop much?
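A common convention in quantized training is to keep the first conv and the final fc layer at higher precision than the hidden layers. A hypothetical per-layer bit-width helper (the layer names and defaults here are illustrative, not taken from this repo) might look like:

```python
def layer_bits(name, default_bits=4, io_bits=8, io_layers=("conv1", "fc")):
    # Hypothetical helper: return the quantization bit width for a layer,
    # keeping the input conv and output fc at io_bits (e.g. 8) while all
    # hidden layers use default_bits (e.g. 4).
    return io_bits if name in io_layers else default_bits
```

Such a table would then be consulted when building each layer's quantizer, so only the boundary layers pay the higher-precision cost.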