The DNN+NeuroSim framework was developed by Prof. Shimeng Yu's group at the Georgia Institute of Technology. The model is made publicly available on a non-commercial basis. Copyright of the model is maintained by the developers, and the model is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International Public License.
This is the released version 2.0 (Mar 15, 2020) of the tool. V2.0 introduces estimation of on-chip-training compute-in-memory (CIM) accelerators, whereas the previous V1.1 supported inference-engine design only. The main additions are:
1. Comprehensive hardware for on-chip training, supporting feed-forward, error calculation, weight-gradient calculation and weight update.
2. More non-ideal properties of synaptic devices (in addition to those in V1.1): nonlinearity and asymmetry of the conductance update, plus cycle-to-cycle and device-to-device variation in the weight update (see the sketch below).
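For intuition, below is a minimal NumPy sketch of the exponential LTP/LTD conductance model that the manual and Documents/nonlinear_fit.m use to capture these effects; the function names, parameter values and noise magnitudes here are illustrative assumptions, not the framework's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def ltp_curve(P, A, Gmin, Gmax, Pmax):
    # Conductance after P potentiation pulses; A sets the nonlinearity.
    B = (Gmax - Gmin) / (1.0 - np.exp(-Pmax / A))
    return B * (1.0 - np.exp(-P / A)) + Gmin

def ltd_curve(P, A, Gmin, Gmax, Pmax):
    # Depression branch, traversed from P = Pmax (Gmax) down to 0 (Gmin).
    # Using a different A than for LTP models the update asymmetry.
    B = (Gmax - Gmin) / (1.0 - np.exp(-Pmax / A))
    return -B * (1.0 - np.exp((P - Pmax) / A)) + Gmax

def apply_pulse(P, potentiate, A_ltp=2.0, A_ltd=4.0,
                Gmin=0.0, Gmax=1.0, Pmax=100, sigma_c2c=0.01):
    # One weight-update pulse: step the internal state, evaluate the
    # asymmetric branch, then add cycle-to-cycle Gaussian noise.
    P = min(P + 1, Pmax) if potentiate else max(P - 1, 0)
    G = (ltp_curve(P, A_ltp, Gmin, Gmax, Pmax) if potentiate
         else ltd_curve(P, A_ltd, Gmin, Gmax, Pmax))
    G += rng.normal(0.0, sigma_c2c * (Gmax - Gmin))
    return P, float(np.clip(G, Gmin, Gmax))

# Device-to-device variation: each synapse draws its own nonlinearity.
A_ltp_array = rng.normal(2.0, 0.2, size=(128, 128))
```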
_For estimation of inference engines, please visit the released DNN+NeuroSim V1.3._
_For an improved version of the on-chip training accelerator with more design options, please visit the released DNN+NeuroSim V2.1._
In V2.0 we currently support only a PyTorch wrapper, in which users can define network structures, parameter precisions and hardware non-ideal properties. With the integrated NeuroSim engine, which takes real traces from the wrapper, the framework supports a hierarchical organization from the device level to the circuit, chip and algorithm levels, enabling instruction-accurate evaluation of both the accuracy and the hardware performance of an on-chip training accelerator.
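As a rough illustration of the kind of precision constraint a user defines in the wrapper, here is a hypothetical stand-in (quantize_uniform and its arguments are invented for this example and are not the wrapper's actual code):

```python
import torch

def quantize_uniform(x, n_bits):
    # Uniform symmetric quantization to n_bits, a simplified stand-in
    # for the wrapper's weight/activation precision setting.
    levels = 2 ** (n_bits - 1) - 1
    scale = x.detach().abs().max() / levels
    return torch.round(x / scale).clamp(-levels, levels) * scale

w = torch.randn(64, 64)        # a layer's weights
w_q = quantize_uniform(w, 5)   # e.g. constrain weights to 5 bits
```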
The default example in this framework is VGG-8 for CIFAR-10.
Because these additional non-ideal-property functions are implemented in the framework, please expect roughly 12 hours of simulation time for the whole training process (default network VGG-8 for CIFAR-10, 256 epochs).
Developers: Xiaochen Peng, Shanshi Huang.
This research is supported by NSF CAREER award, NSF/SRC E2CDA program, and ASCENT, one of the SRC/DARPA JUMP centers.
If you use the tool or adapt it in your work or publication, you are required to cite the following references:
X. Peng, S. Huang, H. Jiang, A. Lu and S. Yu, "DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for Training," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, doi: 10.1109/TCAD.2020.3043731, 2020.
X. Peng, S. Huang, Y. Luo, X. Sun and S. Yu, "DNN+NeuroSim: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators with Versatile Device Technologies," IEEE International Electron Devices Meeting (IEDM), 2019.
If you have logistical questions or comments on the model, please contact Prof. Shimeng Yu; if you have technical questions or comments, please contact Xiaochen Peng or Shanshi Huang.
The released package includes the following files:
1. Manual: Documents/DNN NeuroSim V2.0 Manual.pdf
2. Nonlinearity-to-A conversion table: Documents/Nonlinearity-NormA.htm
3. MATLAB fitting script: Documents/nonlinear_fit.m
4. PyTorch wrapper: Training_pytorch
5. NeuroSim core integrated under the wrapper: Training_pytorch/NeuroSIM
To install and run the tool:
1. Get the tool from GitHub:
git clone https://github.com/neurosim/DNN_NeuroSim_V2.0.git
2. Set up hardware parameters in the NeuroSim core and compile the code:
make
3. Set up hardware constraints in the Python wrapper (train.py).
4. Run the PyTorch wrapper (integrated with NeuroSim), as sketched below.
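Assuming the wrapper's entry point is the train.py file mentioned in step 3 (the exact command-line options, if any, are described in the manual), a run would be launched as:
python train.py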
The following simulation results are expected:
input_activity.csv
weight_dist.csv
delta_dist.csv
PythonWrapper_Output.csv
NeuroSim_Output.csv
NeuroSim_Results_Each_Epoch/NeuroSim_Breakdown_Epoch_0.csv
NeuroSim_Results_Each_Epoch/NeuroSim_Breakdown_Epoch_1.csv
...
NeuroSim_Results_Each_Epoch/NeuroSim_Breakdown_Epoch_256.csv
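These CSV outputs can be post-processed with standard tools. A minimal pandas sketch follows; the file names come from the list above, but the column layout is not described here, so this only loads and previews them:

```python
import pandas as pd

# Chip-level summary produced by NeuroSim.
summary = pd.read_csv("NeuroSim_Output.csv")
print(summary.head())

# Per-epoch hardware breakdowns (epochs 0..256 per the list above),
# concatenated for trend analysis across training.
breakdown = pd.concat(
    pd.read_csv(f"NeuroSim_Results_Each_Epoch/NeuroSim_Breakdown_Epoch_{i}.csv")
    for i in range(257)
)
```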
For detailed usage of this tool, please refer to the user manual (Documents/DNN NeuroSim V2.0 Manual.pdf).