NeuroBench / neurobench

Benchmark harness and baseline results for the NeuroBench algorithm track.
https://neurobench.readthedocs.io
Apache License 2.0

Activation Sparsity #69

Closed jasonlyik closed 11 months ago

jasonlyik commented 1 year ago

Sparsity of activations in a model, at each timestep, for each sample in the testing set. Count the number of zeroes and divide by the total number of possible activations (0.0 refers to no sparsity/all neurons activated; 1.0 refers to full sparsity/no neurons activated). A possible activation is any neuron activation that can theoretically happen.
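A minimal sketch of this zero-counting definition, assuming the activations for one timestep/sample have already been collected into a PyTorch tensor (the function name and example values are illustrative, not part of the harness):

```python
import torch

def activation_sparsity(activations: torch.Tensor) -> float:
    """Fraction of zero activations: 0.0 = no sparsity, 1.0 = fully sparse."""
    zeros = (activations == 0).sum().item()
    return zeros / activations.numel()

# Example: binary spike outputs for 2 neurons over 4 timesteps
spikes = torch.tensor([[1., 0., 0., 1.],
                       [0., 0., 0., 0.]])
print(activation_sparsity(spikes))  # 6 zeros out of 8 -> 0.75
```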

tao-sun commented 12 months ago

I am trying to implement this metric and I have some questions about some implementation details.

jasonlyik commented 12 months ago
tao-sun commented 12 months ago

Conceptually, the activation sparsity (i.e. sparseness of neural activity in [1]) can be defined as average firing probability per time-step per neuron.

Following this definition, if we have the `total_neuron_number` of an SNN model and the `total_number_of_spikes` and `number_of_time_steps` for each sample, we can compute this metric as

    sparsity = total_number_of_spikes / (total_neuron_number * number_of_time_steps)
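A quick worked example of this formula (the numbers are hypothetical). Note that this ratio is the average firing probability; for binary spikes, the zero-counting definition in the issue description is its complement, i.e. `1 - sparsity`:

```python
total_neuron_number = 1000      # spiking neurons in the model
number_of_time_steps = 100      # timesteps per sample
total_number_of_spikes = 5000   # spikes counted over the whole sample

sparsity = total_number_of_spikes / (total_neuron_number * number_of_time_steps)
print(sparsity)  # 5000 / 100000 -> 0.05, each neuron fires in 5% of timesteps on average
```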

With hooks, the steps can be as follows:

  1. Add an abstract class named `NeuroBenchNetwork`, inherited from `nn.Module`, with a method named `__neuro_layers__` to return all the neuron layers.

  2. Add a method in the `NeuroBenchModel` class to return all the neuron layers (i.e. all the PyTorch layers that emit spikes). It can call `NeuroBenchNetwork.__neuro_layers__()` to get all the neuron layers.

  3. Define a class named `Hook` to collect all the outputs of such layers:

class Hook():
    def __init__(self, module):
        self.outputs = []
        self.hook = module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.outputs.append(output[0])  # usually output[0] are spikes

    def close(self):
        self.hook.remove()
  4. For each layer returned in step 1, register a forward hook (i.e. create a `Hook` object for each layer) in `Benchmark`.

  5. After inference of a batch in `Benchmark.run()`, iterate over each hook to get `total_number_of_spikes` (the number of output elements that are not zero) and `total_neurons` (the total number of output elements, equal to `total_neuron_number * number_of_time_steps`).
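The steps above can be sketched end to end. The spiking layer and model here are toy stand-ins, and the hand-built `neuron_layers` list stands in for the proposed `__neuro_layers__()` method:

```python
import torch
import torch.nn as nn

class Hook:
    """Collects a layer's outputs via a forward hook."""
    def __init__(self, module):
        self.outputs = []
        self.hook = module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.outputs.append(output)

    def close(self):
        self.hook.remove()

class ToySpikingLayer(nn.Module):
    """Stand-in neuron layer that emits binary spikes."""
    def forward(self, x):
        return (x > 0).float()

model = nn.Sequential(nn.Linear(4, 8), ToySpikingLayer())
neuron_layers = [model[1]]  # would come from __neuro_layers__() in the proposal
hooks = [Hook(layer) for layer in neuron_layers]

with torch.no_grad():
    model(torch.randn(2, 4))  # inference on one batch of 2 samples

# Aggregate over all hooks and all captured outputs
total_spikes = sum(int((out != 0).sum()) for h in hooks for out in h.outputs)
total_neurons = sum(out.numel() for h in hooks for out in h.outputs)
sparsity = total_spikes / total_neurons  # average firing probability

for h in hooks:
    h.close()
```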

[1] Yin, Bojian, Federico Corradi, and Sander M. Bohté. "Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks." Nature Machine Intelligence 3.10 (2021): 905-913.