fpgasystems / spooNN

FPGA-based neural network inference project with an end-to-end approach (from training to implementation to deployment)
GNU Affero General Public License v3.0
255 stars 73 forks

factorA and factorB in network params #36

Open mvsanjaya opened 4 years ago

mvsanjaya commented 4 years ago

I am not able to understand what the factorA and factorB params in the trained network are. Can someone provide a hint?

JiaMingLin commented 3 years ago

I think they are probably re-scaling factors from the current layer to the next layer. You can refer to the Google paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference", section two. This blog might also help you: https://medium.com/@karanbirchahal/how-to-quantise-an-mnist-network-to-8-bits-in-pytorch-no-retraining-required-from-scratch-39f634ac8459
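To make that interpretation concrete: in the integer-only scheme from the Google paper, the real-valued rescaling multiplier between layers is approximated as an integer multiply followed by a right shift, so hardware never touches floating point. The sketch below assumes factorA is the integer multiplier and factorB is the shift amount; that mapping follows the paper's fixed-point scheme and the guess above, not a confirmed reading of spooNN's code, and the function name `requantize` is mine.

```python
import numpy as np

def requantize(acc, factor_a, factor_b):
    """Rescale an int32 accumulator to the next layer's integer range.

    Approximates acc * M, where the real multiplier M is encoded in
    fixed point as M ~= factor_a / 2**factor_b (assumed meaning of
    factorA / factorB; hypothetical helper, not spooNN's actual API).
    """
    # Widen before multiplying so the product cannot overflow int32.
    scaled = (acc.astype(np.int64) * factor_a) >> factor_b
    # Saturate to the next layer's activation range (8-bit here).
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example: a real multiplier M ~= 0.00392 encoded as
# factor_a = round(0.00392 * 2**16) = 257, factor_b = 16.
acc = np.array([1000, 10000, 50000], dtype=np.int32)
print(requantize(acc, 257, 16))  # -> [  3  39 196]
```

The key property is that the whole pipeline stays in integer arithmetic, which is exactly what an FPGA datapath wants: one multiplier and one barrel shift per output instead of a floating-point unit.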