
NNtrainer


NNtrainer is a Software Framework for training Neural Network models on devices.

Overview

NNtrainer is an open source project. Its aim is to develop a software framework for training neural network models on embedded devices, which have relatively limited resources. Rather than training all layers of a network from scratch, NNtrainer fine-tunes a neural network model on the device with user data for personalization.

Even though NNtrainer runs on devices, it provides the full functionality needed to train models while using limited device resources efficiently. NNTrainer can train various machine learning algorithms such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, reinforcement learning algorithms, recurrent networks, and more. We also provide examples for various tasks such as few-shot learning, ResNet, VGG, and product rating, and more will be added. All of these have been tested on Samsung Galaxy smartphones running Android and on PCs (Ubuntu 18.04/20.04).

A New Frontier of AI: On-Device AI Training and Personalization , ICSE-SEIP, 2024
NNTrainer: Light-Weight On-Device Training Framework , arXiv, 2022
Open Source On-Device AI SW Platform , Samsung Developer Conference 2023 (Korean)
NNTrainer: Personalize neural networks on devices! , Samsung Developer Conference 2021
NNTrainer: "On-device learning" , Samsung AI Forum 2021

Official Releases

| | Tizen | Ubuntu | Android/NDK Build |
|:---:|:---:|:---:|:---:|
| | 6.0M2 and later | 18.04 | 9/P |
| arm | armv7l badge | Available | Ready |
| arm64 | aarch64 badge | Available | android badge |
| x64 | x64 badge | ubuntu badge | Ready |
| x86 | x86 badge | N/A | N/A |
| Publish | Tizen Repo | PPA | |
| API | C (Official) | C/C++ | C/C++ |

Getting Started

Installation

Instructions for installing NNTrainer.

Tutorial

Instructions for creating your own model.

Running Examples

Instructions for preparing NNTrainer for execution.

Examples for NNTrainer

NNTrainer examples for a variety of networks.

Components

Supported Layers

This component defines the layers that make up a neural network model. Each layer has its own properties that can be set; a minimal usage sketch follows the table below.

| Keyword | Layer Class Name | Description |
|---|---|---|
| conv1d | Conv1DLayer | Convolution 1-Dimensional Layer |
| conv2d | Conv2DLayer | Convolution 2-Dimensional Layer |
| pooling2d | Pooling2DLayer | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
| flatten | FlattenLayer | Flatten layer |
| fully_connected | FullyConnectedLayer | Fully connected layer |
| input | InputLayer | Input layer. This is not always required. |
| batch_normalization | BatchNormalizationLayer | Batch normalization layer |
| layer_normalization | LayerNormalizationLayer | Layer normalization layer |
| activation | ActivationLayer | Set by layer property |
| addition | AdditionLayer | Add input layers |
| attention | AttentionLayer | Attention layer |
| centroid_knn | CentroidKNN | Centroid K-nearest neighbor layer |
| concat | ConcatLayer | Concatenate input layers |
| multiout | MultiOutLayer | Multi-output layer |
| backbone_nnstreamer | NNStreamerLayer | Encapsulates NNStreamer as a layer |
| backbone_tflite | TfLiteLayer | Encapsulates TensorFlow Lite as a layer |
| permute | PermuteLayer | Permute layer for transpose |
| preprocess_flip | PreprocessFlipLayer | Preprocess random flip layer |
| preprocess_l2norm | PreprocessL2NormLayer | Preprocess layer that applies simple L2 normalization |
| preprocess_translate | PreprocessTranslateLayer | Preprocess translate layer |
| reshape | ReshapeLayer | Reshape tensor dimension layer |
| split | SplitLayer | Split layer |
| dropout | DropOutLayer | Dropout layer |
| embedding | EmbeddingLayer | Embedding layer |
| positional_encoding | PositionalEncodingLayer | Positional encoding layer |
| rnn | RNNLayer | Recurrent layer |
| rnncell | RNNCellLayer | Recurrent cell layer |
| gru | GRULayer | Gated Recurrent Unit layer |
| grucell | GRUCellLayer | Gated Recurrent Unit cell layer |
| lstm | LSTMLayer | Long Short-Term Memory layer |
| lstmcell | LSTMCellLayer | Long Short-Term Memory cell layer |
| zoneoutlstmcell | ZoneoutLSTMCellLayer | Zoneout Long Short-Term Memory cell layer |
| time_dist | TimeDistLayer | Time-distributed layer |
| multi_head_attention | MultiHeadAttentionLayer | Multi-head attention layer |
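
The keyword in the first column is the string passed to the layer factory, and layer properties are given as "key=value" strings. As a rough illustration, here is a minimal sketch using the C++ (ccapi) interface; the header names, factory functions (`ml::train::createModel` / `ml::train::createLayer`), and property keys shown are assumptions based on the ccapi examples and may differ between releases.

```cpp
// Sketch only: header names, factory functions, and property keys are assumed
// from the ccapi examples and may differ between NNTrainer releases.
#include <layer.h>
#include <model.h>

int main() {
  // Create an empty neural network model.
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);

  // Layers are created by keyword; properties are "key=value" strings.
  model->addLayer(ml::train::createLayer("input", {"input_shape=1:28:28"}));
  model->addLayer(ml::train::createLayer(
      "conv2d", {"filters=8", "kernel_size=3,3", "activation=relu"}));
  model->addLayer(ml::train::createLayer("pooling2d", {"pooling=max", "pool_size=2,2"}));
  model->addLayer(ml::train::createLayer("flatten"));
  model->addLayer(ml::train::createLayer("fully_connected", {"unit=10", "activation=softmax"}));

  model->compile();     // build the layer graph
  model->initialize();  // allocate tensors
  return 0;
}
```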

Supported Optimizers

NNTrainer provides the following optimizers:

| Keyword | Optimizer Name | Description |
|---|---|---|
| sgd | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation | - |

and the following learning rate schedules:

| Keyword | Learning Rate | Description |
|---|---|---|
| exponential | exponential learning rate decay | - |
| constant | constant learning rate | - |
| step | step learning rate | - |
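
The optimizer keyword and its learning-rate behavior are also set through "key=value" properties. A minimal sketch, again assuming the ccapi factory functions and property names (which may differ between releases):

```cpp
// Sketch only: ml::train::createOptimizer and the property names shown are
// assumptions based on the ccapi examples.
#include <model.h>
#include <optimizer.h>

void attach_adam(ml::train::Model &model) {
  // "adam" / "sgd" are the optimizer keywords from the table above.
  auto opt = ml::train::createOptimizer("adam", {"learning_rate=0.001"});
  model.setOptimizer(std::move(opt));
}
```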

Supported Loss Functions

NNTrainer provides the following loss functions:

| Keyword | Class Name | Description |
|---|---|---|
| cross_sigmoid | CrossEntropySigmoidLossLayer | Cross entropy with sigmoid loss layer |
| cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy with softmax loss layer |
| constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
| mse | MSELossLayer | Mean squared error loss layer |
| kld | KLDLossLayer | Kullback-Leibler divergence loss layer |
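
Since the loss keywords above are layer keywords, a loss can be appended to the graph like any other layer. A minimal sketch, assuming the ccapi factory function (some releases also accept a loss as a model/compile property, e.g. "loss=cross"):

```cpp
// Sketch only: loss layers are added by keyword; API names assumed.
#include <layer.h>
#include <model.h>

void add_mse_loss(ml::train::Model &model) {
  // "mse" is the loss keyword from the table above.
  model.addLayer(ml::train::createLayer("mse"));
}
```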

Supported Activation Functions

NNTrainer provides the following activation functions:

| Keyword | Activation Name | Description |
|---|---|---|
| tanh | tanh function | Set as layer property |
| sigmoid | sigmoid function | Set as layer property |
| softmax | softmax function | Set as layer property |
| relu | relu function | Set as layer property |
| leaky_relu | leaky relu function | Set as layer property |
| swish | swish function | Set as layer property |
| gelu | gelu function | Set as layer property |
| quick_gelu | quick gelu function | Set as layer property |
| elu | elu function | Set as layer property |
| selu | selu function | Set as layer property |
| softplus | softplus function | Set as layer property |
| mish | mish function | Set as layer property |
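
Activation functions are normally selected through the "activation" property of a layer (see the activation row in the layer table). A minimal sketch, assuming the ccapi factory function and property spelling:

```cpp
// Sketch only: the "activation" property takes one of the keywords above.
#include <layer.h>
#include <memory>

std::shared_ptr<ml::train::Layer> make_hidden_layer() {
  return ml::train::createLayer("fully_connected",
                                {"unit=64", "activation=gelu"});
}
```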

Tensor

Tensor is responsible for the calculations of a layer. It executes operations such as addition, division, multiplication, dot product, data averaging, and so on. To accelerate calculation, CBLAS (C Basic Linear Algebra Subprograms, for CPU) and cuBLAS (CUDA Basic Linear Algebra Subroutines, for PCs with NVIDIA GPUs) are used for some of the operations. These calculations will be optimized further later. Currently, a lazy calculation mode is supported to reduce the overhead of copying tensors during calculations.

| Keyword | Description |
|---|---|
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| save, read | - |
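
The 4D tensor row above refers to the batch/channel/height/width (B, C, H, W) layout. As a plain-arithmetic illustration of that layout (this is not the library's internal Tensor API), assuming contiguous row-major BCHW storage:

```cpp
// Illustration only: flat index of element (b, c, h, w) in a contiguous,
// row-major B x C x H x W buffer. Not NNTrainer's internal Tensor API.
#include <cstddef>

inline std::size_t bchw_index(std::size_t b, std::size_t c, std::size_t h,
                              std::size_t w, std::size_t C, std::size_t H,
                              std::size_t W) {
  return ((b * C + c) * H + h) * W + w;
}
```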

Others

NNTrainer provides the following additional features:

| Keyword | Feature | Description |
|---|---|---|
| weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), HE (Normal/Uniform) |
| weight_regularizer | Weight decay (L2Norm only) | Requires weight_regularizer_param & type to be set |
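
Both of these are set as layer properties. A minimal sketch, assuming the ccapi factory function; the property names follow the table above, and the value spellings (e.g. xavier_uniform, l2norm) are assumptions that may differ between releases:

```cpp
// Sketch only: property names from the table above; value spellings assumed.
#include <layer.h>
#include <memory>

std::shared_ptr<ml::train::Layer> make_regularized_fc() {
  return ml::train::createLayer(
      "fully_connected",
      {"unit=32",
       "weight_initializer=xavier_uniform",   // assumed value spelling
       "weight_regularizer=l2norm",
       "weight_regularizer_param=0.001"});    // regularization strength
}
```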

APIs

Currently, we provide C APIs for Tizen. C++ APIs are also provided for other platforms. Java & C# APIs will be provided soon.

Maintainer

Reviewers

Open Source License

NNtrainer is an open source project released under the terms of the Apache License, version 2.0.

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.

Citation

If you find this NNTrainer project useful or relevant to your research, please consider citing our paper:

@inproceedings{10.1145/3639477.3639716,
author = {Moon, Jijoong and Lee, Hyeonseok and Chu, Jiho and Park, Donghak and Hong, Seungbaek and Seo, Hyungjun and Jeong, Donghyeon and Kong, Sungsik and Ham, Myungjoo},
title = {A New Frontier of AI: On-Device AI Training and Personalization},
year = {2024},
isbn = {9798400705014},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3639477.3639716},
doi = {10.1145/3639477.3639716},
booktitle = {Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice},
pages = {323–333},
numpages = {11},
keywords = {on-device AI, neural network, personalization, training, software framework},
series = {ICSE-SEIP '24}
}