anira is a high-performance library designed to enable easy, real-time safe integration of neural network inference within audio applications. Compatible with multiple inference backends (LibTorch, ONNX Runtime, and TensorFlow Lite), anira bridges the gap between advanced neural network architectures and real-time audio processing. The paper provides more information about anira's architecture and design decisions, as well as extensive performance evaluations carried out with the built-in benchmarking capabilities.
An extensive anira usage guide can be found here.
The basic usage of anira is as follows:
#include <anira/anira.h>
anira::InferenceConfig inference_config(
    {{"path/to/your/model.onnx", anira::InferenceBackend::ONNX}}, // Model path and backend
    {{{256, 1, 150}}, {{256, 1}}}, // Input and output tensor shapes
    5.33f // Maximum inference time in ms
);
// Create a pre- and post-processor instance
anira::PrePostProcessor pp_processor;
// Create an InferenceHandler instance
anira::InferenceHandler inference_handler(pp_processor, inference_config);
// Pass the host audio configuration and allocate memory for audio processing
inference_handler.prepare({buffer_size, sample_rate});
// Select the inference backend
inference_handler.set_inference_backend(anira::InferenceBackend::ONNX);
// Optionally get the latency of the inference process in samples
int latency_in_samples = inference_handler.get_latency();
// Real-time safe audio processing in process callback of your application
void process(float** audio_data, int num_samples) {
    inference_handler.process(audio_data, num_samples);
    // audio_data now contains the processed audio samples
}
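To make the walkthrough concrete, here is a minimal, self-contained sketch that glues the snippets above together. The buffer size and sample rate are illustrative, and in a real application process() would be called from the audio callback rather than from main():

#include <anira/anira.h>
#include <vector>

int main() {
    anira::InferenceConfig inference_config(
        {{"path/to/your/model.onnx", anira::InferenceBackend::ONNX}}, // Model path and backend
        {{{256, 1, 150}}, {{256, 1}}}, // Input and output tensor shapes
        5.33f // Maximum inference time in ms
    );
    anira::PrePostProcessor pp_processor;
    anira::InferenceHandler inference_handler(pp_processor, inference_config);

    inference_handler.prepare({512, 48000.0}); // illustrative buffer size and sample rate
    inference_handler.set_inference_backend(anira::InferenceBackend::ONNX);

    std::vector<float> channel(512, 0.0f); // one channel of silence
    float* audio_data[] = {channel.data()};
    inference_handler.process(audio_data, 512); // samples are processed in place
    return 0;
}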
anira can be easily integrated into your CMake project. Either add anira as a submodule or download the pre-built binaries from the releases page.
# Add anira repo as a submodule
git submodule add https://github.com/anira-project/anira.git modules/anira
In your CMakeLists.txt, add anira as a subdirectory and link your target to the anira library:
# Setup your project and target
project(your_project)
add_executable(your_target main.cpp ...)
# Add anira as a subdirectory
add_subdirectory(modules/anira)
# Link your target to the anira library
target_link_libraries(your_target anira::anira)
Download the pre-built binaries for your operating system and architecture from the releases page.
# Setup your project and target
project(your_project)
add_executable(your_target main.cpp ...)
# Add the path to the anira library as cmake prefix path and find the package
list(APPEND CMAKE_PREFIX_PATH "path/to/anira")
find_package(anira REQUIRED)
# Link your target to the anira library
target_link_libraries(your_target anira::anira)
You can also build anira from source using CMake. All dependencies are automatically installed during the build process.
git clone https://github.com/anira-project/anira
cmake . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release --target anira
By default, all three inference engines are installed. You can disable specific backends as needed:
-DANIRA_WITH_LIBTORCH=OFF
-DANIRA_WITH_ONNXRUNTIME=OFF
-DANIRA_WITH_TFLITE=OFF
To allow the controversial approach of controlled blocking in the audio callback, which can further reduce latency, a build flag enables the use of a semaphore. The semaphore is not 100% real-time safe, but it allows the use of the wait_in_process_block option in the InferenceConfig class. We recommend this option only if you do not spawn multiple InferenceHandler instances in series. By default, anira uses a real-time safe raw atomic operation.
-DANIRA_WITH_CONTROLLED_BLOCKING=ON
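With the flag enabled, controlled blocking can then be requested per configuration. A minimal sketch, assuming the option is exposed as a public member on InferenceConfig that takes the fraction of the buffer duration the callback may block (the member name and value semantics are assumptions; consult the usage guide for the exact interface):

// Assumption: wait_in_process_block is a public member of anira::InferenceConfig
// (requires building with -DANIRA_WITH_CONTROLLED_BLOCKING=ON)
anira::InferenceConfig blocking_config(
    {{"path/to/your/model.onnx", anira::InferenceBackend::ONNX}},
    {{{256, 1, 150}}, {{256, 1}}},
    5.33f
);
blocking_config.m_wait_in_process_block = 0.5f; // hypothetical: block up to half the buffer duration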
Moreover, the following options are available:
-DANIRA_WITH_BENCHMARK=ON # Build with benchmarking capabilities
-DANIRA_WITH_EXAMPLES=ON # Build the built-in examples
-DANIRA_WITH_BELA_EXAMPLE=ON # Build the Bela example
-DANIRA_WITH_TESTS=ON # Build the unit tests
To use anira for inference with your custom models, check out the extensive usage guide. If you want to use anira for benchmarking, check out the benchmarking guide and the section below. Detailed documentation on anira's API will be available soon in our upcoming wiki.
anira allows users to benchmark and compare the inference performance of different neural network models, backends, and audio configurations. The benchmarking capabilities can be enabled during the build process with the -DANIRA_WITH_BENCHMARK=ON flag. The benchmarks are implemented using the Google Benchmark and Google Test libraries; both are automatically linked with the anira library when benchmarking is enabled. To provide a reproducible and easy-to-use benchmarking environment, anira provides a custom Google Benchmark fixture, anira::benchmark::ProcessBlockFixture, that is used to define benchmarks. This fixture offers many useful functions for setting up and running benchmarks. For more information on how to use the benchmarking capabilities, check out the benchmarking guide.
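As an illustration, a benchmark defined with the fixture might look like the following sketch. Only the standard Google Benchmark macros and the fixture name come from the text above; the header include, the setup steps, and the registration parameters are assumptions, so see the benchmarking guide for a complete, working example.

#include <benchmark/benchmark.h>
// Assumption: the fixture is made available through an anira benchmark header

typedef anira::benchmark::ProcessBlockFixture ProcessBlockFixture;

BENCHMARK_DEFINE_F(ProcessBlockFixture, BM_SIMPLE_MODEL)(::benchmark::State& state) {
    // Setup (InferenceConfig, PrePostProcessor, InferenceHandler) would go here,
    // typically using the fixture's helper functions.
    for (auto _ : state) {
        // Process one audio buffer per iteration; the fixture measures the elapsed time.
    }
}

BENCHMARK_REGISTER_F(ProcessBlockFixture, BM_SIMPLE_MODEL)
    ->Unit(::benchmark::kMillisecond)
    ->Iterations(50)
    ->Repetitions(10);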
anira's real-time safety is checked in this repository with the rtsan sanitizer.
If you use anira in your research or project, please cite either the paper or the software itself:
@inproceedings{ackvaschulz2024anira,
author={Ackva, Valentin and Schulz, Fares},
booktitle={2024 IEEE 5th International Symposium on the Internet of Sounds (IS2)},
title={ANIRA: An Architecture for Neural Network Inference in Real-Time Audio Applications},
year={2024},
volume={},
number={},
pages={1-10},
publisher={IEEE},
doi={10.1109/IS262782.2024.10704099}
}
@software{ackvaschulz2024anira,
author = {Valentin Ackva and Fares Schulz},
title = {anira: an architecture for neural network inference in real-time audio application},
url = {https://github.com/anira-project/anira},
version = {x.x.x},
year = {2024},
}
This project is licensed under Apache-2.0.