<img alt="Coverity Scan Build Status" src="https://scan.coverity.com/projects/22512/badge.svg"/>
NNTrainer is a software framework for training neural network models on devices.
NNTrainer is an open source project. Its aim is to develop a software framework to train neural network models on embedded devices, which have relatively limited resources. Rather than training whole layers of a network from scratch, NNTrainer fine-tunes neural network models on device with user data for personalization.
Even though NNTrainer runs on device, it provides full functionality for training models while utilizing limited device resources efficiently. NNTrainer can train various machine learning algorithms, such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, reinforcement learning algorithms, recurrent networks, and more. We also provide examples for various tasks such as few-shot learning, ResNet, VGG, and product rating, and more will be added. All of these have been tested on Samsung Galaxy smartphones with Android and on PCs (Ubuntu 18.04/20.04).
- A New Frontier of AI: On-Device AI Training and Personalization, ICSE-SEIP, 2024
- NNTrainer: Light-Weight On-Device Training Framework, arXiv, 2022
- Open Source On-Device AI SW Platform, Samsung Developer Conference 2023 (Korean)
- NNTrainer: Personalize neural networks on devices!, Samsung Developer Conference 2021
- NNTrainer: "On-device learning", Samsung AI Forum 2021
|   | Tizen | Ubuntu | Android/NDK Build |
|---|---|---|---|
| OS Version | 6.0M2 and later | 18.04 | 9/P |
| arm |   | Available | Ready |
| arm64 |   | Available |   |
| x64 |   |   | Ready |
| x86 |   | N/A | N/A |
| Publish | Tizen Repo | PPA |   |
| API | C (Official) | C/C++ | C/C++ |
- Instructions for installing NNTrainer.
- Instructions for creating your own model.
- Instructions for preparing NNTrainer for execution.
- NNTrainer examples for a variety of networks.
This component defines the layers that compose a neural network model. Each layer has its own set of properties to configure. The keywords below identify layer types in model descriptions; a usage sketch with the C API follows the table.
Keyword | Layer Class Name | Description |
---|---|---|
conv1d | Conv1DLayer | Convolution 1-Dimensional Layer |
conv2d | Conv2DLayer | Convolution 2-Dimensional Layer |
pooling2d | Pooling2DLayer | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
flatten | FlattenLayer | Flatten layer |
fully_connected | FullyConnectedLayer | Fully connected layer |
input | InputLayer | Input Layer. This is not always required. |
batch_normalization | BatchNormalizationLayer | Batch normalization layer |
layer_normalization | LayerNormalizationLayer | Layer normalization layer |
activation | ActivationLayer | Set by layer property |
addition | AdditionLayer | Add input layers |
attention | AttentionLayer | Attention layer |
centroid_knn | CentroidKNN | Centroid K-nearest neighbor layer |
concat | ConcatLayer | Concatenate input layers |
multiout | MultiOutLayer | Multi-Output Layer |
backbone_nnstreamer | NNStreamerLayer | Encapsulate NNStreamer layer |
backbone_tflite | TfLiteLayer | Encapsulate tflite as a layer |
permute | PermuteLayer | Permute layer for transpose |
preprocess_flip | PreprocessFlipLayer | Preprocess random flip layer |
preprocess_l2norm | PreprocessL2NormLayer | Preprocess layer that applies simple L2 normalization |
preprocess_translate | PreprocessTranslateLayer | Preprocess translate layer |
reshape | ReshapeLayer | Reshape tensor dimension layer |
split | SplitLayer | Split layer |
dropout | DropOutLayer | Dropout Layer |
embedding | EmbeddingLayer | Embedding Layer |
positional_encoding | PositionalEncodingLayer | Positional Encoding Layer |
rnn | RNNLayer | Recurrent Layer |
rnncell | RNNCellLayer | Recurrent Cell Layer |
gru | GRULayer | Gated Recurrent Unit Layer |
grucell | GRUCellLayer | Gated Recurrent Unit Cell Layer |
lstm | LSTMLayer | Long Short-Term Memory Layer |
lstmcell | LSTMCellLayer | Long Short-Term Memory Cell Layer |
zoneoutlstmcell | ZoneoutLSTMCellLayer | Zoneout Long Short-Term Memory Cell Layer |
time_dist | TimeDistLayer | Time distributed Layer |
multi_head_attention | MultiHeadAttentionLayer | Multi Head Attention Layer |
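As a rough illustration of how these layers are assembled, here is a minimal sketch using the ML Training C API, where each keyword above corresponds to a layer type enum; the header path, shape, and property values are illustrative assumptions rather than a canonical recipe:

```c
#include <nntrainer.h> /* ML Training C API; header path may differ by install */

/* Build a tiny two-layer model: an input layer followed by a
 * fully connected layer. Property values are placeholders. */
int build_model(ml_train_model_h *model) {
  ml_train_layer_h input_layer, fc_layer;
  int status = ml_train_model_construct(model);
  if (status != ML_ERROR_NONE)
    return status;

  /* "input" keyword -> ML_TRAIN_LAYER_TYPE_INPUT; shape is channel:height:width */
  ml_train_layer_create(&input_layer, ML_TRAIN_LAYER_TYPE_INPUT);
  ml_train_layer_set_property(input_layer, "name=inputlayer",
                              "input_shape=1:1:62720", NULL);
  ml_train_model_add_layer(*model, input_layer);

  /* "fully_connected" keyword -> ML_TRAIN_LAYER_TYPE_FC */
  ml_train_layer_create(&fc_layer, ML_TRAIN_LAYER_TYPE_FC);
  ml_train_layer_set_property(fc_layer, "name=fc1", "unit=10",
                              "activation=softmax", NULL);
  ml_train_model_add_layer(*model, fc_layer);
  return ML_ERROR_NONE;
}
```

The same model can also be described declaratively in an INI configuration file, with the layer keywords above as section types.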
NNTrainer provides the following optimizers:
Keyword | Optimizer Name | Description |
---|---|---|
sgd | Stochastic Gradient Descent | - |
adam | Adaptive Moment Estimation | - |
NNTrainer also provides the following learning rate schedulers:
Keyword | Learning Rate Scheduler | Description |
---|---|---|
exponential | exponential learning rate decay | - |
constant | constant learning rate | - |
step | step learning rate | - |
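The sketch below shows how an optimizer and its learning rate are typically attached through the C API. The beta1/beta2 keys are standard Adam properties; any scheduler-specific keys should be treated as version-dependent assumptions:

```c
#include <nntrainer.h>

/* Attach an Adam optimizer to a constructed model. "adam" and "sgd" map to
 * ML_TRAIN_OPTIMIZER_TYPE_ADAM / ML_TRAIN_OPTIMIZER_TYPE_SGD. */
int attach_adam(ml_train_model_h model) {
  ml_train_optimizer_h opt;
  int status = ml_train_optimizer_create(&opt, ML_TRAIN_OPTIMIZER_TYPE_ADAM);
  if (status != ML_ERROR_NONE)
    return status;
  ml_train_optimizer_set_property(opt, "learning_rate=0.001",
                                  "beta1=0.9", "beta2=0.999", NULL);
  return ml_train_model_set_optimizer(model, opt);
}
```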
NNTrainer provides the following loss layers:
Keyword | Class Name | Description |
---|---|---|
cross_sigmoid | CrossEntropySigmoidLossLayer | Cross entropy sigmoid loss layer |
cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy softmax loss layer |
constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
mse | MSELossLayer | Mean square error loss layer |
kld | KLDLossLayer | Kullback-Leibler Divergence loss layer |
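A loss is usually selected when the model is compiled. A minimal sketch, assuming the compile-time `loss=` property accepts keywords such as `cross` (fused with the final sigmoid/softmax activation) and `mse`:

```c
#include <nntrainer.h>

/* Compile a constructed model with a cross entropy loss.
 * The batch size value is an illustrative placeholder. */
int compile_with_loss(ml_train_model_h model) {
  return ml_train_model_compile(model, "loss=cross", "batch_size=32", NULL);
}
```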
NNTrainer provides the following activation functions:
Keyword | Activation Function | Description |
---|---|---|
tanh | tanh function | set as layer property |
sigmoid | sigmoid function | set as layer property |
softmax | softmax function | set as layer property |
relu | relu function | set as layer property |
leaky_relu | leaky_relu function | set as layer property |
swish | swish function | set as layer property |
gelu | gelu function | set as layer property |
quick_gelu | quick gelu function | set as layer property |
elu | elu function | set as layer property |
selu | selu function | set as layer property |
softplus | softplus function | set as layer property |
mish | mish function | set as layer property |
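Since activations are set as a layer property, applying one is a one-liner. A small sketch, assuming the `activation=` property key shown in the layer examples above:

```c
#include <nntrainer.h>

/* Create a fully connected layer with a ReLU activation; any keyword
 * from the table above (relu, sigmoid, gelu, ...) can be the value. */
int make_relu_fc(ml_train_layer_h *layer) {
  int status = ml_train_layer_create(layer, ML_TRAIN_LAYER_TYPE_FC);
  if (status != ML_ERROR_NONE)
    return status;
  return ml_train_layer_set_property(*layer, "unit=64", "activation=relu", NULL);
}
```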
Tensor is responsible for the calculations of a layer. It performs operations such as addition, division, multiplication, dot product, and data averaging. To accelerate these calculations, CBLAS (C interface to Basic Linear Algebra Subprograms, for CPU) and cuBLAS (CUDA Basic Linear Algebra Subroutines, for PCs with NVIDIA GPUs) implementations are provided for some of the operations. These calculations will be optimized further later. We currently support a lazy calculation mode to reduce the overhead of copying tensors during calculations.
Feature | Description |
---|---|
4D Tensor | B, C, H, W |
Add/sub/mul/div | - |
sum, average, argmax | - |
Dot, Transpose | - |
normalization, standardization | - |
save, read | - |
NNTrainer provides the following weight initialization and regularization options:
Keyword | Name | Description |
---|---|---|
weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), He (Normal/Uniform) |
weight_regularizer | Weight decay (L2Norm only) | requires weight_regularizer_param and type to be set |
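A sketch of setting these properties on a layer. The value spellings (e.g. `xavier_uniform`, `l2norm`) and the regularizer constant key are assumptions that may differ across NNTrainer versions:

```c
#include <nntrainer.h>

/* Fully connected layer with Xavier initialization and L2 weight decay.
 * The key "weight_regularizer_constant" is an assumed spelling of the
 * regularizer parameter named in the table above. */
int make_regularized_fc(ml_train_layer_h *layer) {
  int status = ml_train_layer_create(layer, ML_TRAIN_LAYER_TYPE_FC);
  if (status != ML_ERROR_NONE)
    return status;
  return ml_train_layer_set_property(*layer, "unit=10",
                                     "weight_initializer=xavier_uniform",
                                     "weight_regularizer=l2norm",
                                     "weight_regularizer_constant=0.001", NULL);
}
```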
Currently, we provide C APIs for Tizen. C++ APIs are also provided for other platforms. Java and C# APIs will be provided soon.
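For a feel of the official C API, here is a minimal end-to-end sketch, assuming a hypothetical `model.ini` that describes both the network and its dataset; the compile and run property values are illustrative placeholders:

```c
#include <stdio.h>
#include <nntrainer.h>

/* Build a model from an INI description, compile it, train, and clean up. */
int main(void) {
  ml_train_model_h model;
  int status = ml_train_model_construct_with_conf("model.ini", &model);
  if (status != ML_ERROR_NONE) {
    fprintf(stderr, "model construction failed: %d\n", status);
    return 1;
  }

  status = ml_train_model_compile(model, "loss=cross", "batch_size=16", NULL);
  if (status == ML_ERROR_NONE)
    status = ml_train_model_run(model, "epochs=10",
                                "save_path=trained_model.bin", NULL);

  ml_train_model_destroy(model);
  return status == ML_ERROR_NONE ? 0 : 1;
}
```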
NNTrainer is an open source project released under the terms of the Apache License, version 2.0.
Contributions are welcome! Please see our Contributing Guide for more details.
If you find NNTrainer useful or relevant to your research, please consider citing our paper:
```bibtex
@inproceedings{10.1145/3639477.3639716,
author = {Moon, Jijoong and Lee, Hyeonseok and Chu, Jiho and Park, Donghak and Hong, Seungbaek and Seo, Hyungjun and Jeong, Donghyeon and Kong, Sungsik and Ham, Myungjoo},
title = {A New Frontier of AI: On-Device AI Training and Personalization},
year = {2024},
isbn = {9798400705014},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3639477.3639716},
doi = {10.1145/3639477.3639716},
booktitle = {Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice},
pages = {323–333},
numpages = {11},
keywords = {on-device AI, neural network, personalization, training, software framework},
series = {ICSE-SEIP '24}
}
```