genn-team / ml_genn

A library for deep learning with Spiking Neural Networks (SNN).
https://ml-genn.readthedocs.io
GNU Lesser General Public License v2.1

Issues with validation training and e-prop #80

Open neworderofjamie opened 8 months ago

neworderofjamie commented 8 months ago

I clearly wasn't thinking about e-prop when I added support for training with validation in #57. Currently, gradients are accumulated during validation (e.g. at https://github.com/genn-team/ml_genn/blob/master/ml_genn/ml_genn/compilers/eprop_compiler.py#L209), so they get applied at the end of the first training batch of the next epoch. In e-prop, the state, especially the gradients, needs to be fully reset before training resumes.

I think the cleanest solution is to fully reset the state at the beginning of each training epoch. That way, the eligibility traces, adaptation variables and gradients accumulated during validation won't affect training, but the model will still be fully adapted after a complete epoch of training, making validation a bit more realistic.
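A minimal sketch of the proposed epoch loop. This is not the ml_genn API; `ModelState`, `train_batch`, `validate_batch` and `run_epochs` are hypothetical names used purely to illustrate where the full reset would go and why it isolates validation-time accumulation from training:

```python
# Hypothetical sketch of the proposed fix: fully reset ALL e-prop state
# (gradients, eligibility traces, adaptation variables) at the START of
# each training epoch, so nothing accumulated during the previous
# validation pass leaks into training. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class ModelState:
    gradient: float = 0.0
    eligibility_trace: float = 0.0
    adaptation: float = 0.0

    def reset(self) -> None:
        """Zero all accumulated state, including gradients."""
        self.gradient = 0.0
        self.eligibility_trace = 0.0
        self.adaptation = 0.0


def train_batch(state: ModelState, x: float) -> None:
    # e-prop accumulates gradients via eligibility traces during training
    state.eligibility_trace += x
    state.gradient += state.eligibility_trace


def validate_batch(state: ModelState, x: float) -> None:
    # The bug described above: the same accumulation path also runs
    # during validation, so gradients build up here too.
    state.eligibility_trace += x
    state.gradient += state.eligibility_trace


def run_epochs(train_data, valid_data, n_epochs: int) -> ModelState:
    state = ModelState()
    for _ in range(n_epochs):
        # Proposed fix: full reset BEFORE the training epoch discards
        # anything accumulated during the previous validation pass.
        state.reset()
        for x in train_data:
            train_batch(state, x)
        # Validation still runs on the fully-adapted model; whatever it
        # accumulates is thrown away at the start of the next epoch.
        for x in valid_data:
            validate_batch(state, x)
    return state
```

Because the reset happens before training rather than before validation, every epoch starts from identical state regardless of how much validation data was processed in the previous epoch.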

tnowotny commented 8 months ago

I still don't understand how to handle the adaptation variables, though. Resetting makes sense in some ways, but isn't it sub-optimal if our intuition is right that the network settles into a "working regime" away from the reset values?

neworderofjamie commented 8 months ago

But carrying information about that regime over from the validation data into the next training epoch seems like cheating.

tnowotny commented 8 months ago

Yes, it could lead to overestimating validation performance. It wouldn't actually overstate accuracy as long as the test set stays separate. But then how do we handle evaluation on the test set?