Closed leankemski closed 2 years ago
Can you tell me why quant_scale=116 in FPGA_CNN_INT8.ipynb? When I changed it to 64 or other values, the accuracy dropped to only about 40%. Thank you.
The quant_scale scales each layer's weights into the int8 range of -128 to 127, so as to maximize the use of int8 precision. It is calculated from the trained CNN weights.
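A minimal sketch of how such a scale is typically derived (this is symmetric per-tensor quantization; the variable names and the random weights below are illustrative, not taken from FPGA_CNN_INT8.ipynb):

```python
import numpy as np

# Hypothetical trained layer weights; the real scale would come from the
# actual CNN weights trained in the notebook.
weights = np.random.default_rng(0).normal(0.0, 1.1, size=(3, 3, 16)).astype(np.float32)

# Choose the scale so the largest-magnitude weight maps near 127.
quant_scale = 127.0 / np.max(np.abs(weights))

# Quantize: scale, round, and clip to the signed 8-bit range [-128, 127].
q_weights = np.clip(np.round(weights * quant_scale), -128, 127).astype(np.int8)

# Dequantize to check the reconstruction error introduced by rounding.
recon = q_weights.astype(np.float32) / quant_scale
print("quant_scale =", quant_scale)
print("max abs error =", np.max(np.abs(recon - weights)))
```

This explains why an arbitrary value like 64 hurts accuracy: a scale that is too small wastes most of the int8 range, so the quantization error relative to the weights grows and the network's predictions degrade.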
Thank you so much.