ZhaoqxCN / PYNQ-CNN-ATTEMPT

Some attempts to build CNN on PYNQ.
MIT License

parameter quant_scale in mnist #3

Closed leankemski closed 2 years ago

leankemski commented 4 years ago

Can you tell me why quant_scale=116 in FPGA_CNN_INT8.ipynb? When I changed it to 64 or other numbers, the accuracy dropped to only about 40%. Thank you.

ZhaoqxCN commented 4 years ago

The quant_scale enlarges the weights of each layer into the int8 range of -128 to 127, so as to make full use of the int8 precision. It is calculated from the trained CNN weights.
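The thread does not show how 116 was derived, but a common way to pick such a scale is to map the largest-magnitude trained weight close to the int8 limit of 127. A minimal sketch (the helper names `compute_quant_scale` and `quantize_int8` are hypothetical, not from this repo):

```python
import numpy as np

def compute_quant_scale(weights):
    # Hypothetical helper: choose a scale so the largest-magnitude
    # float weight lands near the int8 limit of 127.
    return 127.0 / np.max(np.abs(weights))

def quantize_int8(weights, scale):
    # Scale, round, and clip the float weights into the int8 range.
    return np.clip(np.round(weights * scale), -128, 127).astype(np.int8)

# Example with random float weights standing in for trained CNN weights.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 8)).astype(np.float32)

scale = compute_quant_scale(w)
wq = quantize_int8(w, scale)

# Dequantize to check the approximation error introduced by rounding.
w_approx = wq.astype(np.float32) / scale
print("scale:", scale)
print("max abs error:", np.max(np.abs(w - w_approx)))
```

Under this scheme, an arbitrary scale such as 64 either wastes the upper part of the int8 range or clips large weights, which would explain the accuracy drop the question describes.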

leankemski commented 4 years ago

Thank you so much.