High-performance inference of Meta's Encodec deep-learning-based audio codec model.
Here is a demo of running Encodec on a single M1 MacBook Pro:
https://github.com/PABannier/encodec.cpp/assets/12958149/d11561be-98e9-4504-bba7-86bcc233a499
Here are the steps to build and run encodec.cpp.
git clone --recurse-submodules https://github.com/PABannier/encodec.cpp.git
cd encodec.cpp
In order to build encodec.cpp you must use CMake:
mkdir build
cd build
cmake ..
cmake --build . --config Release
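Once built, the resulting library can be linked into your own C++ code. The sketch below is illustrative only: it assumes an API along the lines of encodec_load_model, encodec_compress_audio, encodec_decompress_audio, encodec_get_codes, and encodec_free, and the model path is a placeholder. Check encodec.h in this repository for the exact function names and signatures.

```cpp
// Minimal sketch of round-tripping audio through encodec.cpp.
// NOTE: function names and signatures are assumptions based on encodec.h;
// verify them against the header shipped with the repository.
#include <cstdint>
#include <cstdio>
#include <vector>

#include "encodec.h"

int main() {
    const int n_threads = 4;

    // Load the ggml weights (path is a placeholder).
    struct encodec_context *ectx = encodec_load_model("ggml-model.bin", /*offset=*/0, /*n_gpu_layers=*/0);
    if (!ectx) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // One second of silence at 24 kHz as dummy input audio.
    std::vector<float> audio(24000, 0.0f);

    // Encode the raw waveform into discrete codes.
    if (!encodec_compress_audio(ectx, audio.data(), audio.size(), n_threads)) {
        fprintf(stderr, "compression failed\n");
        return 1;
    }
    int32_t *codes = encodec_get_codes(ectx);
    int n_codes    = encodec_get_codes_size(ectx);
    printf("produced %d codes\n", n_codes);

    // Decode the codes back into a waveform.
    if (!encodec_decompress_audio(ectx, codes, n_codes, n_threads)) {
        fprintf(stderr, "decompression failed\n");
        return 1;
    }
    printf("reconstructed %d samples\n", encodec_get_audio_size(ectx));

    encodec_free(ectx);
    return 0;
}
```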
Offloading to the GPU is possible with the Metal backend on macOS. Performance is not improved, but power consumption and CPU activity are reduced.
cmake -DGGML_METAL=ON -DBUILD_SHARED_LIBS=Off ..
cmake --build . --config Release
Inference can also be offloaded to a CUDA backend with cuBLAS.
cmake -DGGML_CUBLAS=ON -DBUILD_SHARED_LIBS=Off ..
cmake --build . --config Release
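With either GPU backend enabled at build time, offloading is typically requested when the model is loaded. The snippet below is a sketch only; the n_gpu_layers parameter is an assumption and may be named differently in the current encodec.h.

```cpp
// Sketch: request GPU offload at model-load time when a Metal or CUDA
// build of encodec.cpp is used. The n_gpu_layers argument is an assumption;
// consult encodec.h for the exact parameter.
#include "encodec.h"

struct encodec_context *load_with_gpu(const char *model_path) {
    // A large value asks the backend to offload as many layers as possible.
    const int n_gpu_layers = 32;
    return encodec_load_model(model_path, /*offset=*/0, n_gpu_layers);
}
```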