A small note:
The forward function you define is already implemented in the SNNTorchModel wrapper. The version in the wrapper is a bit more robust for models with e.g. synaptic neurons, which do not seem to reset properly via utils.reset(). During testing, this method filled up RAM when using second-order models. Would it be possible to wrap the model in the SNNTorchModel wrapper instead?
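To illustrate the failure mode I mean: if a reset utility only clears the first-order (membrane) state, the second-order (synaptic) state survives across forward calls and accumulates. The sketch below is a toy analogy, not actual snnTorch or SNNTorchModel code; the class and method names are made up for illustration.

```python
# Toy stand-in for a second-order neuron (NOT snnTorch code): it keeps
# both a membrane trace and a synaptic-current trace between calls.
class ToySynapticNeuron:
    def __init__(self):
        self.mem_trace = []   # first-order state, cleared by reset()
        self.syn_trace = []   # second-order state, missed by reset()

    def forward(self, x):
        self.mem_trace.append(x)
        self.syn_trace.append(x)
        return x

    def reset(self):
        # Mimics a reset helper that only handles first-order state.
        self.mem_trace.clear()


neuron = ToySynapticNeuron()
for step in range(1000):
    neuron.forward(step)
    neuron.reset()  # syn_trace is never cleared

# mem_trace stays empty, but syn_trace keeps growing: this unbounded
# growth is the RAM-filling behaviour described in the note above.
print(len(neuron.mem_trace), len(neuron.syn_trace))  # → 0 1000
```

The point of routing through the wrapper would be that it owns the full per-neuron state and can re-initialise all of it (not just the membrane state) at the start of each forward pass.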