NeuroBench / neurobench

Benchmark harness and baseline results for the NeuroBench algorithm track.
https://neurobench.readthedocs.io
Apache License 2.0

Activation sparsity #87

Closed: tao-sun closed this 1 year ago

tao-sun commented 1 year ago

The main changes:

  1. Move the computation of activation sparsity into metrics.py.
  2. Delegate to self.__net__()._neurolayers to collect all the neuron layers, rather than relying on a NeuroBenchNetwork class.
  3. Add a hooks parameter to each of the data-metrics functions in metrics.py (see the sketch after this list).
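
For context, a minimal sketch of what change 3 could look like. The signature and the hook buffer attribute are illustrative assumptions, not the actual metrics.py API:

```python
# Illustrative sketch only: assumes each hook object buffers the outputs
# of its layer during forward passes (attribute name is hypothetical).
def activation_sparsity(model, data, hooks):
    """Fraction of zero activations observed across all hooked layers."""
    total, zeros = 0, 0
    for inputs, _targets in data:
        model(inputs)  # forward pass; hooks capture layer outputs
        for hook in hooks:
            for out in hook.activation_outputs:  # hypothetical buffer
                total += out.numel()
                zeros += (out == 0).sum().item()
    return zeros / total if total > 0 else 0.0
```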
tao-sun commented 1 year ago

Fixes #69

jasonlyik commented 1 year ago

Changed PR base to dev rather than main

korneelf1 commented 1 year ago

Could you implement the construction of the hooks automatically, similar to the way the model loops through the layers for connection sparsity?

tao-sun commented 1 year ago

> Could you implement the construction of the hooks automatically, similar to the way the model loops through the layers for connection sparsity?

Do you mean that a list of neuron-layer types could be predefined, and that we should check each layer in a model and add a hook to it if it is a neuron layer?

korneelf1 commented 1 year ago

> Could you implement the construction of the hooks automatically, similar to the way the model loops through the layers for connection sparsity?

> Do you mean that a list of neuron-layer types could be predefined, and that we should check each layer in a model and add a hook to it if it is a neuron layer?

The connection sparsity function loops through the layers of the model and computes the sparsity automatically, without requiring the user to create a list of the included layers. It would be nice if something similar were done for activation sparsity, where all neuron layers (.Leaky, .Synaptic, ...) and activation functions (ReLU, ...) are recognized and hooks are created automatically. An additional function, e.g. add_hooks, could allow the user to specify custom neuron models or activation functions that are not detected automatically.
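
For illustration, a minimal sketch of the auto-detection described above, assuming snnTorch is installed; the default type list and the function name are assumptions, not NeuroBench's actual helper:

```python
import torch.nn as nn
import snntorch as snn

# Layer types recognized as neuron/activation layers by default
# (illustrative list; a real implementation would cover more types).
DEFAULT_ACTIVATION_TYPES = (snn.Leaky, snn.Synaptic, nn.ReLU)

def register_activation_hooks(model, extra_types=()):
    """Attach a forward hook to every recognized activation layer."""
    captured = []  # (module, output) pairs filled during forward passes

    def hook(module, inputs, output):
        # Note: snnTorch neurons return (spk, mem) tuples, so sparsity
        # would be computed on the spike tensor within the output.
        captured.append((module, output))

    handles = []
    recognized = DEFAULT_ACTIVATION_TYPES + tuple(extra_types)
    for module in model.modules():  # recurses into nested blocks
        if isinstance(module, recognized):
            handles.append(module.register_forward_hook(hook))
    return captured, handles
```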

tao-sun commented 1 year ago

> The connection sparsity function loops through the layers of the model and computes the sparsity automatically, without requiring the user to create a list of the included layers. It would be nice if something similar were done for activation sparsity, where all neuron layers (.Leaky, .Synaptic, ...) and activation functions (ReLU, ...) are recognized and hooks are created automatically. An additional function, e.g. add_hooks, could allow the user to specify custom neuron models or activation functions that are not detected automatically.

Two things I would like to discuss:

  1. Where is a good place to let the user specify their own neuron layers? I think this could be in their own net (i.e. NeuroBenchModel.__net__()).
  2. Other than the three layers (Leaky, Synaptic, ReLU), do we know of any other such layers now? A list of such layers would have to keep growing, which means we might have to modify metrics.activation_sparsity() from time to time; from a software-engineering perspective, I think that is not good practice.
tao-sun commented 1 year ago

> Could you implement the construction of the hooks automatically, similar to the way the model loops through the layers for connection sparsity?

> Do you mean that a list of neuron-layer types could be predefined, and that we should check each layer in a model and add a hook to it if it is a neuron layer?

> The connection sparsity function loops through the layers of the model and computes the sparsity automatically, without requiring the user to create a list of the included layers. It would be nice if something similar were done for activation sparsity, where all neuron layers (.Leaky, .Synaptic, ...) and activation functions (ReLU, ...) are recognized and hooks are created automatically. An additional function, e.g. add_hooks, could allow the user to specify custom neuron models or activation functions that are not detected automatically.

By "user", do you mean the person writing a file like dvs_gesture.py?

korneelf1 commented 1 year ago

> The connection sparsity function loops through the layers of the model and computes the sparsity automatically, without requiring the user to create a list of the included layers. It would be nice if something similar were done for activation sparsity, where all neuron layers (.Leaky, .Synaptic, ...) and activation functions (ReLU, ...) are recognized and hooks are created automatically. An additional function, e.g. add_hooks, could allow the user to specify custom neuron models or activation functions that are not detected automatically.

> Two things I would like to discuss:
>
>   1. Where is a good place to let the user specify their own neuron layers? I think this could be in their own net (i.e. NeuroBenchModel.__net__()).
>   2. Other than the three layers (Leaky, Synaptic, ReLU), do we know of any other such layers now? A list of such layers would have to keep growing, which means we might have to modify metrics.activation_sparsity() from time to time; from a software-engineering perspective, I think that is not good practice.

So the user (the person wanting to benchmark a model using NeuroBench) would start by loading the data, then define the model and wrap it in an SNNTorchModel() wrapper (model = SNNTorchModel(net)). The init function of the wrapper would auto-detect the relevant neuron layers, creating a list with all the hooks. Next, the user would have the option to call model.add_activations(custom_activations), which would add the hooks for these custom layers to the hook list in the model. This function should only append layers to the list when they are not yet in it, avoiding duplicates.

Other than the three layers I mentioned initially, there are already implementations for many other neuron models and activation functions. The available neuron models in snnTorch are listed at https://snntorch.readthedocs.io/en/latest/snntorch.html#neuron-list, and some of the activation functions in PyTorch at https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity. We should clearly state which layers will be auto-detected; this warns the user to add their own custom activations if necessary.
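
A minimal sketch of the wrapper flow described above; beyond the SNNTorchModel and add_activations names taken from the comment, the wiring is an assumption:

```python
import torch.nn as nn
import snntorch as snn

class SNNTorchModel:
    """Sketch: wraps a network and auto-registers activation hooks."""

    DEFAULT_TYPES = (snn.Leaky, snn.Synaptic, nn.ReLU)

    def __init__(self, net):
        self.net = net
        self.activation_hooks = []
        self._hooked = set()  # modules already hooked, to avoid duplicates
        self._register(self.DEFAULT_TYPES)

    def add_activations(self, custom_types):
        """Register hooks for user-specified neuron/activation types."""
        self._register(tuple(custom_types))

    def _register(self, types):
        for module in self.net.modules():
            if isinstance(module, types) and module not in self._hooked:
                self._hooked.add(module)
                handle = module.register_forward_hook(
                    lambda m, i, o: None  # placeholder; a real hook records o
                )
                self.activation_hooks.append(handle)

# Usage (MyCustomNeuron is hypothetical):
# model = SNNTorchModel(net)
# model.add_activations([MyCustomNeuron])
```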

korneelf1 commented 1 year ago

Thank you for the improvements! One issue with the .children() method you are using now to find layers is that it won't search within blocks (e.g. if your model is a nested set of nn.Sequential() blocks). If you look in the dev branch of this repository, the implementation of connection sparsity recursively searches for leaf nodes. The recursive search method implemented there should be almost directly usable by you (you will have to change the layer types to your desired layer types, such as snn.Leaky). I am sorry for not clarifying that this implementation is so far only in the dev branch and not in the main branch (from which you based your current implementation).
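
The distinction here: .children() yields only a module's immediate submodules, while a recursive search reaches leaves inside nested containers. A sketch of the recursive approach (the dev-branch helper may differ in its details):

```python
def find_leaf_modules(module):
    """Recursively collect modules that have no submodules of their own."""
    children = list(module.children())
    if not children:  # no submodules: this is a leaf layer
        return [module]
    leaves = []
    for child in children:
        leaves.extend(find_leaf_modules(child))
    return leaves

# Equivalent using PyTorch's built-in recursive iterator:
# leaves = [m for m in model.modules() if not list(m.children())]
```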

tao-sun commented 1 year ago

> Thank you for the improvements! One issue with the .children() method you are using now to find layers is that it won't search within blocks (e.g. if your model is a nested set of nn.Sequential() blocks). If you look in the dev branch of this repository, the implementation of connection sparsity recursively searches for leaf nodes. The recursive search method implemented there should be almost directly usable by you (you will have to change the layer types to your desired layer types, such as snn.Leaky). I am sorry for not clarifying that this implementation is so far only in the dev branch and not in the main branch (from which you based your current implementation).

I fixed this issue; will you have a review?

korneelf1 commented 1 year ago

> Thank you for the improvements! One issue with the .children() method you are using now to find layers is that it won't search within blocks (e.g. if your model is a nested set of nn.Sequential() blocks). If you look in the dev branch of this repository, the implementation of connection sparsity recursively searches for leaf nodes. The recursive search method implemented there should be almost directly usable by you (you will have to change the layer types to your desired layer types, such as snn.Leaky). I am sorry for not clarifying that this implementation is so far only in the dev branch and not in the main branch (from which you based your current implementation).

> I fixed this issue; will you have a review?

Hi, thank you for fixing this issue. I will review the code ASAP!

korneelf1 commented 1 year ago

I have added some functionality to correctly compute activation sparsity for both ANN and SNN models, and I have pushed the changes to your branch. Could you verify them?

korneelf1 commented 1 year ago

Due to the limited timeline available, I will pull my changes into the NeuroBench framework; I have tested the changes and they pass the tests. Please let us know if you find any important errors in my changes to your code.

tao-sun commented 1 year ago

I am not sure I understand this correctly. It seems that this feature has not been integrated into the dev (i.e. Effective_MACs) branch.