This package provides Unity3D bindings for whisper.cpp. It offers high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model running on your local machine.
This repository comes with the "ggml-tiny.bin" model weights. This is the smallest and fastest version of the Whisper model, but its quality is worse compared to the other models. If you want better quality, check out the other model weights.
Main features:
Supported platforms:
https://user-images.githubusercontent.com/6161335/231581911-446286fd-833e-40a2-94d0-df2911b22cad.mp4
"whisper-small.bin" model tested with English, German and Russian speech from a microphone
https://user-images.githubusercontent.com/6161335/231584644-c220a647-028a-42df-9e61-5291aca3fba0.mp4
"whisper-tiny.bin" model, 50x faster than realtime on a MacBook with M1 Pro
Clone this repository and open it as a regular Unity project. It comes with examples and tiny multilingual model weights.
Alternatively, you can add this repository to your project as a Unity package. Add it to your Unity Package Manager by this git URL:
https://github.com/Macoron/whisper.unity.git?path=/Packages/com.whisper.unity
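Equivalently, you can declare the dependency directly in your project's `Packages/manifest.json` (a minimal sketch; the package name is assumed to match the path in the URL above):

```json
{
  "dependencies": {
    "com.whisper.unity": "https://github.com/Macoron/whisper.unity.git?path=/Packages/com.whisper.unity"
  }
}
```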
A Unity project compiled with CUDA enabled expects your end users to have an Nvidia GPU and the CUDA libraries. Trying to run such a build without them will result in an error.
To run inference with CUDA, you need a supported GPU and the CUDA Toolkit installed (tested with 12.2.0).
After that, go to Project Settings => Whisper => Enable CUDA. This forces the package to use the library compiled for CUDA.
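Before enabling CUDA, you can sanity-check that the toolkit is actually installed. A minimal sketch, assuming a POSIX shell (`nvcc` ships with the CUDA Toolkit):

```shell
# Check whether the CUDA Toolkit (nvcc) is on PATH before enabling CUDA.
# This README notes the package was tested with CUDA Toolkit 12.2.0.
if command -v nvcc >/dev/null 2>&1; then
    CUDA_STATUS="present"
    nvcc --version | grep -i "release"
else
    CUDA_STATUS="missing"
    echo "CUDA Toolkit not found; install it before enabling CUDA in Project Settings."
fi
echo "CUDA toolkit: $CUDA_STATUS"
```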
Whisper.cpp supports Metal only on the Apple7 GPU family or newer (starting from Apple M1 chips). On older hardware it will fall back to CPU inference.
To activate Metal inference, go to Project Settings => Whisper => Enable Metal. This forces the package to use the library compiled for Metal.
You can try different Whisper model weights. For example, you can improve English language transcription by using English-only weights or by trying bigger models.
You can download model weights from here. Just put them into your StreamingAssets folder.
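As a sketch of that workflow: whisper.cpp publishes its ggml weights in the `ggerganov/whisper.cpp` Hugging Face repository, so they can be fetched from there. The model name and destination path below are examples, not required names:

```shell
# Example: place ggml model weights into the Unity StreamingAssets folder.
# URL pattern is the one used by whisper.cpp's Hugging Face model repository.
MODEL="ggml-base.bin"
URL="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/${MODEL}"
DEST_DIR="Assets/StreamingAssets"

mkdir -p "$DEST_DIR"
echo "Downloading $URL -> $DEST_DIR/$MODEL"
# Uncomment to actually download (model files are large, hundreds of MB for bigger models):
# curl -L -o "$DEST_DIR/$MODEL" "$URL"
```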
For more information about model differences and formats, read the whisper.cpp readme and the OpenAI readme.
This project comes with prebuilt whisper.cpp libraries for all supported platforms. You can rebuild them from source using GitHub Actions: fork this repo and go to Actions => Build C++ => Run workflow. After the pipeline completes, download the compiled libraries from the Artifacts tab.
In case you want to build libraries on your machine:
```
# Windows:
.\build_cpp.bat cpu path\to\whisper

# macOS (also builds the iOS and Android libraries; needs the Android NDK toolchain file):
sh build_cpp.sh path/to/whisper all path/to/ndk/android.toolchain.cmake

# Linux:
sh build_cpp_linux.sh path/to/whisper cpu
```
Compiled libraries will be placed in the Plugins folder. Windows will produce only the Windows library, Linux only the Linux library, and macOS will produce the macOS, iOS and Android libraries.
The macOS build script was tested on a Mac with an ARM processor. For Intel processors you might need to change some parameters.
This project is licensed under the MIT License.
It uses compiled libraries and model weights of whisper.cpp, which is also under the MIT License.
Original OpenAI Whisper code and weights are also under MIT license.