official-stockfish / nnue-pytorch

Stockfish NNUE (Chess evaluation) trainer in PyTorch
GNU General Public License v3.0

Quantization losses in the L1 biases of L1 3072 #279

Open Viren6 opened 5 months ago

Viren6 commented 5 months ago

The average absolute L1 bias of the L1 3072 master net is 186531 / 3072 = 60.72

In contrast, the average absolute L1 bias of the L1 128 small net is 272474 / 128 = 2128.7

What appears to have happened is that, as the L1 size has increased, the average absolute bias of each neuron has decreased to the point where we are now encountering quantization losses in the L1 biases.
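To make the effect concrete, here is a minimal sketch (not the repository's serializer; the 127 scale, the rounding scheme, and the synthetic bias vectors are assumptions for illustration) that rounds float biases to integer quantization steps and compares the resulting round-off error at the two reported magnitudes:

```python
import math
import torch

def quantization_report(bias: torch.Tensor, scale: float = 127.0):
    """Round a float bias vector to integer quantization steps and report
    the average absolute quantized bias plus the induced round-off error.
    The 127 scale is an assumed constant for illustration, not necessarily
    what the serializer uses."""
    q = (bias * scale).round()   # integer biases as stored in the net
    dequant = q / scale          # value the engine effectively sees
    avg_abs_q = q.abs().mean().item()
    avg_err = (bias - dequant).abs().mean().item()
    rel_err = avg_err / bias.abs().mean().item()
    return avg_abs_q, avg_err, rel_err

# Toy bias vectors matching the reported average absolute quantized
# magnitudes. For X ~ N(0, sigma^2), E|X| = sigma * sqrt(2/pi), so pick
# sigma to hit the target average.
torch.manual_seed(0)
for name, n, target in [("L1 3072", 3072, 60.72), ("L1 128", 128, 2128.7)]:
    sigma = target / math.sqrt(2 / math.pi)
    bias = torch.randn(n) * (sigma / 127.0)
    avg_q, err, rel = quantization_report(bias)
    print(f"{name}: avg |q| = {avg_q:.1f}, "
          f"avg round-off = {err:.5f} ({rel:.2%} of avg |bias|)")
```

Since the round-off per bias is on the order of 0.25 quantization steps regardless of magnitude, the relative loss is roughly 35x larger at an average magnitude of 60.7 than at 2128.7.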

EDIT: It seems that randomly applying an adjustment of 1 to each bias (instead of adjusting all of them) yields a 0 Elo result. This situation is closer to true quantization; I will verify the quantization loss properly once I receive the model in a few days.
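For reference, one plausible reading of the perturbation experiment above (a hypothetical sketch, not the exact procedure used; the subset probability of 0.5 is an assumption) is to add 1 to a random subset of the integer biases rather than to all of them:

```python
import torch

def randomly_bump_biases(q_bias: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Add 1 to a random subset of the integer biases (probability `p` each),
    instead of adjusting all of them. This mirrors the rough magnitude of a
    quantization round-off, which the experiment is meant to approximate."""
    mask = (torch.rand_like(q_bias, dtype=torch.float32) < p).to(q_bias.dtype)
    return q_bias + mask

# Example on a toy int16 bias vector of the L1 3072 shape.
torch.manual_seed(0)
q = torch.randint(-200, 200, (3072,), dtype=torch.int16)
perturbed = randomly_bump_biases(q)
print((perturbed - q).float().mean())  # ~0.5: half the biases moved one step
```

A 0 Elo result under such a perturbation would suggest the biases tolerate noise on the order of one quantization step, which is also the scale of error that rounding introduces.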