This repository contains a minimal PyTorch implementation of the (Branch) Expressive Leaky Memory (ELM) neuron. Notebooks for training and evaluation on NeuronIO are provided, along with pre-trained models of various sizes.
```
conda env create -f elm_env.yml
conda activate elm_env
```
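For orientation, here is a minimal, illustrative sketch of the leaky-memory recurrence at the heart of the ELM neuron: memory units decay with learnable per-unit timescales and are updated by an MLP reading the input and previous memory. All names and sizes here are hypothetical; see the `src` folder for the actual implementation.

```python
import torch

def elm_step(x_t, m_prev, decay, mlp):
    """One leaky-memory update (illustrative sketch, not the repo's API).

    Each memory unit decays toward an MLP-proposed target;
    decay in (0, 1) sets that unit's effective timescale.
    """
    delta = mlp(torch.cat([x_t, m_prev], dim=-1))
    return decay * m_prev + (1.0 - decay) * torch.tanh(delta)

d_in, d_m = 8, 10  # hypothetical input / memory dimensions
mlp = torch.nn.Sequential(
    torch.nn.Linear(d_in + d_m, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, d_m),
)
decay = torch.sigmoid(torch.randn(d_m))  # per-unit timescales in (0, 1)

m = torch.zeros(d_m)
for t in range(5):  # unroll a few timesteps
    m = elm_step(torch.randn(d_in), m, decay, mlp)
print(m.shape)  # torch.Size([10])
```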
The models folder contains Branch-ELM neuron models of various sizes, pre-trained on NeuronIO.
| $d_m$ | 1 | 2 | 3 | 5 | 7 | 10 | 15 | 20 | 25 | 30 | 40 | 50 | 75 | 100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #params | 4601 | 4708 | 4823 | 5077 | 5363 | 5852 | 6827 | 8002 | 9377 | 10952 | 14702 | 19252 | 34127 | 54002 |
| AUC | 0.9437 | 0.9582 | 0.9558 | 0.9757 | 0.9827 | 0.9878 | 0.9915 | 0.9922 | 0.9926 | 0.9929 | 0.9934 | 0.9934 | 0.9938 | 0.9935 |
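Incidentally, the parameter counts in the table follow an exact quadratic in $d_m$. This is an empirical fit to the numbers above, not a formula taken from the code:

```python
# Empirical fit to the table above: #params = 4*d_m**2 + 95*d_m + 4502.
# This is an observation about the listed checkpoints, not code from the repo.
def n_params(d_m: int) -> int:
    return 4 * d_m**2 + 95 * d_m + 4502

table = {1: 4601, 2: 4708, 3: 4823, 5: 5077, 7: 5363, 10: 5852,
         15: 6827, 20: 8002, 25: 9377, 30: 10952, 40: 14702,
         50: 19252, 75: 34127, 100: 54002}
assert all(n_params(d) == p for d, p in table.items())
```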
We also include a best-effort trained ELM neuron ($d_m=100$) that achieves 0.9946 AUC.
The src folder contains the implementation and training/evaluation utilities.
Note: the PyTorch implementation is, unfortunately, roughly 2x slower than the JAX version.
Running the NeuronIO-related code requires downloading the dataset first (~115 GB).
Running the SHD-related code does not require downloading the dataset separately (~0.5 GB).
LRA training/evaluation code is not provided at the moment.
If you find this useful and use an ELM variant or the SHD-Adding dataset, please consider citing:
[1] Spieler, A., Rahaman, N., Martius, G., Schölkopf, B., & Levina, A. (2023). The ELM Neuron: an Efficient and Expressive Cortical Neuron Model Can Solve Long-Horizon Tasks. arXiv preprint arXiv:2306.16922.