Samsung / ONE

ONE (On-device Neural Engine)

ONE Logo

A high-performance, on-device neural network inference framework.

Goal

ONE aims to provide a high-performance, on-device neural network (NN) inference framework that runs a given NN model on processors such as the CPU, GPU, DSP, or NPU.

We develop a runtime that runs on Linux kernel-based OS platforms such as Ubuntu, Tizen, and Android, and a compiler toolchain that converts NN models created with various NN training frameworks, such as TensorFlow or PyTorch, into a unified form for the runtime.

Overview

Getting started

Feature Request

You can suggest ONE features that are not yet available.

The features requested so far are listed in the popular feature request list.

We expect one of the most frequent feature requests to be new operator kernel implementations. Filing a request is good, but contributing the implementation yourself is even better. See the guide How to add a new operation for help.

We are looking forward to your participation. Thank you in advance!

How to Contact