Currently, normalisation for learning requires a State Variable to be passed out of a Weight Update, remapped via a connection, and then passed back in. This is both inefficient and opaque for what is a commonly required operation.
Instead, normalisation should be defined as an attribute on Learning StateVariables, @norm={'none','post','pre'}, denoting no normalisation, postsynaptic normalisation or presynaptic normalisation respectively. If @norm is absent, 'none' is assumed.
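As a rough illustration of how this might look in a component definition (element and attribute names here are placeholders, not the actual schema), a weight StateVariable tagged for postsynaptic normalisation could be written as:

    <StateVariable name="w" norm="post"/>
    <!-- norm="post": w is normalised across the connections converging on each postsynaptic neuron -->
    <!-- omitting norm (or norm="none") leaves the variable unnormalised, as now -->

This keeps the normalisation declaration local to the Learning component, rather than spreading it across a round-trip through a connection.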
Any comments?