A harness for running evaluations on `NeuroBench <https://neurobench.ai>`__ algorithm benchmarks.
NeuroBench is a community-driven project, and we welcome further
development from the community. If you are interested in developing
extensions to features, programming frameworks, or metrics and tasks,
please see the `Contributing Guidelines <https://neurobench.readthedocs.io/en/latest/contributing.html>`__.
NeuroBench contains the following sections:

- `neurobench.benchmarks <https://neurobench.readthedocs.io/en/latest/neurobench.benchmarks.html>`__
- `neurobench.datasets <https://neurobench.readthedocs.io/en/latest/neurobench.datasets.html>`__
- `neurobench.models <https://neurobench.readthedocs.io/en/latest/neurobench.models.html>`__
- `neurobench.preprocessing <https://neurobench.readthedocs.io/en/latest/neurobench.preprocessing.html>`__
- `neurobench.postprocessing <https://neurobench.readthedocs.io/en/latest/neurobench.postprocessing.html>`__
Install from PyPI:

::

   pip install neurobench
The following benchmarks are currently available:

v1.0 benchmarks:

- Keyword Few-shot Class-incremental Learning (FSCIL)
- Event Camera Object Detection
- Non-human Primate (NHP) Motor Prediction
- Chaotic Function Prediction
Additional benchmarks and example benchmark scripts can be found under the
`neurobench/examples <https://github.com/NeuroBench/neurobench/tree/main/neurobench/examples/>`__ folder.
In general, the design flow for using the framework is as follows:

1. Wrap your trained network in a ``NeuroBenchModel``.
2. Pass the wrapped model, along with the evaluation data and the desired metrics, to a ``Benchmark`` and call ``run()``.

Documentation for the framework interfaces can be found in the `API Overview <https://neurobench.readthedocs.io/en/latest/api.html>`__.
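The flow above can be illustrated with a simplified, pure-Python mock. The class and
method names (``NeuroBenchModel``, ``Benchmark``, ``run()``) mirror the framework's
interfaces, but the constructor arguments and metric handling here are deliberately
reduced for illustration; consult the API Overview for the actual signatures.

.. code:: python

   # Simplified mock of the NeuroBench design flow (illustrative only;
   # see the API Overview for the real interfaces and arguments).

   class NeuroBenchModel:
       """Wraps a trained network behind a uniform inference interface."""
       def __init__(self, net):
           self.net = net

       def __call__(self, batch):
           return self.net(batch)

   class Benchmark:
       """Runs a wrapped model over data and computes the requested metrics."""
       def __init__(self, model, data, metrics):
           self.model = model
           self.data = data        # iterable of (input, label) pairs
           self.metrics = metrics  # dict of name -> metric function

       def run(self):
           # Collect (prediction, label) pairs, then score each metric.
           preds = [(self.model(x), y) for x, y in self.data]
           return {name: fn(preds) for name, fn in self.metrics.items()}

   # Toy usage: a "network" that predicts the parity of an integer.
   net = lambda x: x % 2
   data = [(1, 1), (2, 0), (3, 1), (4, 1)]  # (input, label) pairs
   accuracy = lambda preds: sum(p == y for p, y in preds) / len(preds)

   model = NeuroBenchModel(net)
   benchmark = Benchmark(model, data, {"accuracy": accuracy})
   print(benchmark.run())  # {'accuracy': 0.75}

The real ``Benchmark`` additionally takes pre- and postprocessors (see
``neurobench.preprocessing`` and ``neurobench.postprocessing``), which this sketch omits.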
If you clone the repo directly for development, `poetry <https://pypi.org/project/poetry/>`__
can be used to maintain a virtualenv consistent with a deployment environment. In the
root directory, run:

::

   pip install poetry
   poetry install

Poetry requires Python >=3.9. Installation should not take more than a few minutes.
End-to-end examples can be run from the poetry environment. As a demo, try the Google Speech Commands keyword classification benchmark:

::

   poetry run python neurobench/examples/gsc/benchmark_ann.py
   poetry run python neurobench/examples/gsc/benchmark_snn.py

These demos should download the dataset and then run in a couple of minutes. Other baseline result scripts and notebook
tutorials are available in the `neurobench/examples <https://github.com/NeuroBench/neurobench/tree/main/neurobench/examples/>`__ folder.
NeuroBench is a collaboration between industry and academic engineers
and researchers. This framework is currently maintained by
`Jason Yik <https://www.linkedin.com/in/jasonlyik/>`__,
`Noah Pacik-Nelson <https://www.linkedin.com/in/noah-pacik-nelson/>`__, and
`Korneel Van den Berghe <https://www.linkedin.com/in/korneel-van-den-berghe/>`__, and
there have been technical contributions from many others. A
non-exhaustive list includes Gregor Lenz, Denis Kleyko, Younes
Bouhadjar, Paul Hueber, Vincent Sun, Biyan Zhou, George Vathakkattil
Joseph, Douwe den Blanken, Maxime Fabre, Shenqi Wang, Guangzhi Tang,
Anurag Kumar Mishra, Soikat Hasan Ahmed, Benedetto Leto, Aurora Micheli,
Tao Sun.
If you are interested in helping to build this framework, please see the
`Contribution Guidelines <https://neurobench.readthedocs.io/en/latest/contributing.html>`__.
If you use this framework in your research, please cite the following preprint article:

::

   @misc{yik2024neurobench,
     title={NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems},
     author={Jason Yik and Korneel Van den Berghe and Douwe den Blanken and Younes Bouhadjar and
             Maxime Fabre and Paul Hueber and Denis Kleyko and Noah Pacik-Nelson and
             Pao-Sheng Vincent Sun and Guangzhi Tang and Shenqi Wang and Biyan Zhou and
             Soikat Hasan Ahmed and George Vathakkattil Joseph and Benedetto Leto and
             Aurora Micheli and Anurag Kumar Mishra and Gregor Lenz and Tao Sun and
             Zergham Ahmed and Mahmoud Akl and Brian Anderson and Andreas G. Andreou and
             Chiara Bartolozzi and Arindam Basu and Petrut Bogdan and Sander Bohte and
             Sonia Buckley and Gert Cauwenberghs and Elisabetta Chicca and Federico Corradi and
             Guido de Croon and Andreea Danielescu and Anurag Daram and Mike Davies and
             Yigit Demirag and Jason Eshraghian and Tobias Fischer and Jeremy Forest and
             Vittorio Fra and Steve Furber and P. Michael Furlong and William Gilpin and
             Aditya Gilra and Hector A. Gonzalez and Giacomo Indiveri and Siddharth Joshi and
             Vedant Karia and Lyes Khacef and James C. Knight and Laura Kriener and
             Rajkumar Kubendran and Dhireesha Kudithipudi and Yao-Hong Liu and Shih-Chii Liu and
             Haoyuan Ma and Rajit Manohar and Josep Maria Margarit-Taulé and Christian Mayr and
             Konstantinos Michmizos and Dylan Muir and Emre Neftci and Thomas Nowotny and
             Fabrizio Ottati and Ayca Ozcelikkale and Priyadarshini Panda and Jongkil Park and
             Melika Payvand and Christian Pehle and Mihai A. Petrovici and Alessandro Pierro and
             Christoph Posch and Alpha Renner and Yulia Sandamirskaya and Clemens JS Schaefer and
             André van Schaik and Johannes Schemmel and Samuel Schmidgall and Catherine Schuman and
             Jae-sun Seo and Sadique Sheik and Sumit Bam Shrestha and Manolis Sifalakis and
             Amos Sironi and Matthew Stewart and Kenneth Stewart and Terrence C. Stewart and
             Philipp Stratmann and Jonathan Timcheck and Nergis Tömen and Gianvito Urgese and
             Marian Verhelst and Craig M. Vineyard and Bernhard Vogginger and
             Amirreza Yousefzadeh and Fatima Tuz Zohora and Charlotte Frenkel and
             Vijay Janapa Reddi},
     year={2024},
     eprint={2304.04640},
     archivePrefix={arXiv},
     primaryClass={cs.AI}
   }