neworderofjamie opened this pull request 8 months ago
> **Codecov report** — Attention: patch coverage is 37.81% with 74 lines in your changes missing coverage. Please review. Project coverage is 61.48%, comparing base (8822f90) to head (ab24e11).
I am going to add support to the e-prop and inference compilers before I merge this but thought I'd open a PR for a sanity check. Basic design is:
**Quantisation helpers**
- Take some percentile (e.g. 99%) of the un-quantised weight distribution and fit a fixed-point format to it

**Quantisation callbacks**
- Pull weights, calculate the quantised version using the helper, and push

**EventProp compiler**
- Uses the `g` variable for the quantised version of the weight so that, when checkpoints are loaded into the inference compiler, the quantised weights will be used
- Adds a `gBack` variable to trainable weights to hold the un-quantised weight which gradients get applied to

This does assume each synapse group (in GeNN speak) has its own scaling factor - will need to confirm whether this is the case on Loihi. Training with EventProp and 8-bit weights doesn't cause any problems with MNIST; SHD example accuracy goes from XXX to 63.9%.
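The helper step above could be sketched roughly as follows. This is an illustrative assumption of how percentile-based fixed-point fitting might work, not the actual mlGeNN API; the function names (`fit_fixed_point`, `quantise`) and the fixed 8-bit signed word size are hypothetical:

```python
import numpy as np

def fit_fixed_point(weights, percentile=99.0, num_bits=8):
    """Fit a signed fixed-point format to a weight distribution.

    Returns the number of fractional bits such that the representable
    range of a num_bits-wide signed format covers the given percentile
    of the absolute weight values.
    """
    # Magnitude the format must be able to represent
    max_mag = np.percentile(np.abs(weights), percentile)
    # Integer bits needed to cover max_mag (sign bit handled separately)
    int_bits = max(0, int(np.ceil(np.log2(max_mag))))
    # Whatever is left of the word is fractional precision
    return num_bits - 1 - int_bits

def quantise(weights, frac_bits, num_bits=8):
    """Round weights to the nearest representable fixed-point value,
    clipping to the signed num_bits range (values outside the fitted
    percentile land in the saturated long tails)."""
    scale = 2.0 ** frac_bits
    q_min = -(2 ** (num_bits - 1))
    q_max = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(weights * scale), q_min, q_max) / scale
```

A callback along the lines described above would then pull the weight view from the synapse group, call `quantise(weights, fit_fixed_point(weights))`, and push the result back to the device before the next batch.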
Quantised weight distribution (note quantisation artefacts in long tails):