
BlahST

Blah Speech-to-Text lets you have a bla(h)st inputting text from speech on Linux, with keyboard shortcuts and whisper.cpp. Fire up your microphone and perform high-quality, multilingual speech recognition offline. Extended with local LLMs, it becomes a potent tool for conversing with your Linux computer.

BlahST is probably the leanest Whisper-based speech-to-text input tool for Linux, sitting on top of whisper.cpp.

Because it relies on low-resource, optimized command-line tools, spoken text input happens very fast. Here is a demonstration video (please UNMUTE the audio) showing some upcoming features (an AI translator and assistant, currently in testing):

https://github.com/user-attachments/assets/877c699d-cf8b-4dd2-bc0e-75dee9054cf2

In the above video, the audio starts with the system announcing the screencast (my GNOME extension "Voluble" speaks all GNOME desktop notifications out loud), followed by multiple turns of speech input and recognition. Demonstrated at the end is one of the upcoming "AI functions", which takes the text transcribed by BlahST (whisper.cpp), formats it into an LLM prompt and sends it to a local multilingual LLM (llama.cpp or llamafile), which returns the Chinese translation as text and also speaks it using a neural TTS. Orchestrating this from the command line with lean executables leaves the system surprisingly snappy (the video shows that the PC barely breaks a sweat; temperatures remain low-ish).

https://github.com/user-attachments/assets/c3842318-14cb-4874-8651-7bc92abd187f

The above video (unmute, please) demonstrates blooper, a variant of wsi that transcribes in a loop until the user ends speech input with a longer pause (~3 s by default). With xdotool (or ydotool for Wayland users), text is pasted automatically on any pause (or on hotkey interruption). For this video, the speech is generated with a synthetic voice and picked up by the microphone, which lets me edit the text concurrently (multitaskers, don't try this at home :). At the end, the top-bar microphone icon should disappear, indicating program exit; it does not happen in the video because the screencast utility also has a claim on the icon.

Principle of Operation (the best UI is no UI at all.)

The idea with BlahST is to be the UI-free software equivalent of a Mongol raid: a short, powerful burst of CPU/GPU action and then it is completely gone, leaving only textual traces in the clipboard and relative desktop peace. Just use a pair of hotkeys to start and stop recording from the microphone and send the recorded speech to whisper.cpp, which dumps the transcribed text into the clipboard (unless you pass it through an AI first). A universal approach that should work in most Linux desktop environments and distributions.

The work is done by the wsi (wsiml for multilingual users) script, similar to the one in the GNOME extension Blurt. Speech recognition is performed by whisper.cpp, which must be precompiled on your Linux system or available as a server instance on your LAN or localhost. Alternatively, you can simply download and use an actually portable executable (with an embedded whisper model), a whisperfile, now part of the llamafile repository.

When speech input is initiated with a hotkey, a microphone indicator appears in the top bar (at least in GNOME) and is shown for the duration of the recording (which can be interrupted with another hotkey). The disappearance of the microphone icon from the top bar indicates completion, and the transcribed text can then be pasted from the clipboard. On slower systems there may be a slight delay between the icon disappearing and the text reaching the clipboard, due to longer transcription time. On my computer, via the whisper.cpp server API, it is less than 150 ms (300 ms with local whisper.cpp) for an average paragraph of spoken text.

For keyboard-only operation (pasting with the standard CTRL+V, for example), the regular clipboard is used under both X11 and Wayland (`wsi -c` or `wsiml -c`), while plain `wsi` (or `wsiml`) uses the PRIMARY selection, and text is pasted with the middle mouse button. For left-hand paste, speech recording can be relegated to hotkeys triggered with the right hand; for example, I have set up the otherwise unused "+" and "Insert" keys on the numeric keypad.
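Conceptually, the whole round trip fits in a handful of commands. Below is a minimal sketch of that flow, assuming the `transcribe` symlink described in the installation section, a model already copied to /dev/shm, and `xsel`/`wl-clipboard` installed; the real `wsi` script adds option parsing, dependency checks, text cleanup and the server/whisperfile paths shown in the dataflow diagrams below.

```
#!/usr/bin/env bash
# Minimal sketch of the wsi flow (illustrative only, not the actual script).
ramf=/dev/shm/blahst_demo.wav                       # temporary audio file kept in RAM

# 1. Record 16 kHz audio until ~2 s of silence (sox's 'rec' frontend, as used by wsi)
rec -t wav "$ramf" rate 16k silence 1 0.1 3% 1 2.0 6%

# 2. Transcribe with whisper.cpp ('transcribe' is the symlink to its main binary;
#    the model path is an example)
text=$(transcribe -m /dev/shm/ggml-base.en.bin -f "$ramf" -nt 2>/dev/null)

# 3. Put the text in the PRIMARY selection (X11) or the Wayland clipboard
if [ -n "$WAYLAND_DISPLAY" ]; then
    printf '%s' "$text" | wl-copy --primary
else
    printf '%s' "$text" | xsel -ip
fi
```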

DATAFLOW DIAGRAMS

#### wsi script

![wsiAI dataflow](https://github.com/user-attachments/assets/12a4a576-5227-4592-82ad-b8618a1cfae7)

#### blooper

![blooper dataflow](https://github.com/user-attachments/assets/550d51fc-65f3-4c01-b355-9c6bd0ff2c49)

SYSTEM SETUP

PREREQUISITES:

INSTALLATION

In a folder of your choice, clone the BlahST repository and then choose an installation method from below:

git clone https://github.com/QuantiusBenignus/BlahST.git
USING THE INSTALLATION SCRIPT

Run the script `install-wsi` from the folder of the cloned repository and follow the prompts. It will move the script and make it executable, create a link to the whisper.cpp `main` executable, set the environment, set a default whisper.cpp model, check for dependencies and request their installation if missing, etc. The script will also download and set up a whisperfile of your choice if you select that option. The installation script also handles setup for network transcription, but the IP and port for the whisper.cpp server must be set manually in `wsi` or `wsiml`. Run `wsi` or `wsiml` directly from the command line first to verify its proper operation; later it will be invoked only with [hotkeys](https://github.com/QuantiusBenignus/BlahST/#gui-setup-of-hotkeys) for speed and convenience.
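In short, the scripted route amounts to something like the following (whether you need an explicit `bash` depends on the executable bit of the script in your clone):

```
cd BlahST
bash install-wsi     # follow the interactive prompts
```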
MANUAL INSTALLATION

*(Assuming whisper.cpp is installed and the "main" executable compiled with 'make' in the cloned whisper.cpp repo. See the Prerequisites section.)*

* Place the script **wsi** and/or **wsiml** in $HOME/.local/bin/
* Make it executable:
```
cd $HOME/.local/bin; chmod +x wsi
```
* Run it once from the command line to let the script check for required dependencies.
* If using local whisper.cpp, create a symbolic link (the code expects 'transcribe' in your $PATH) to the compiled "main" executable in the whisper.cpp directory. For example, create it in your `$HOME/.local/bin/` (part of your $PATH) with
```
ln -s /full/path/to/whisper.cpp/main $HOME/.local/bin/transcribe
```
If transcribe is not in your $PATH, either edit the call to it in **wsi** to include the absolute path, or add its location to the $PATH variable. Otherwise the script will fail.
* If you prefer not to compile whisper.cpp, or in addition to that, download a suitable whisperfile and set its executable flag, for example:
```
cd $HOME/.local/bin
wget https://huggingface.co/Mozilla/whisperfile/resolve/main/whisper-tiny.en.llamafile
chmod +x whisper-tiny.en.llamafile
```
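Before wiring up hotkeys, it may be worth confirming that the manual setup works from a terminal. A quick check along these lines (the model and repository paths are examples; adjust them to your installation):

```
command -v transcribe      # should print $HOME/.local/bin/transcribe
transcribe -m /path/to/whisper.cpp/models/ggml-base.en.bin -f /path/to/whisper.cpp/samples/jfk.wav
```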

CONFIGURATION

For manual installation only:

Inside the wsi or wsiml script, near the beginning, there is a clearly marked section named "USER CONFIGURATION BLOCK", where all the user-configurable variables (described in the following section) have been collected. Most can be left as is, but the important ones are the location of the whisper.cpp model file that you would like to use during transcription (or the IP and port number for the whisper.cpp server). If using a whisperfile, please set the WHISPERFILE variable to the filename of the previously downloaded whisperfile, i.e. WHISPERFILE=whisper-tiny.en.llamafile
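As an illustration, such a configuration block might look roughly like the sketch below. Only WHISPERFILE and TEMPD are named in this README; the other variable names are placeholders, so check the actual script for the real ones and their defaults.

```
# USER CONFIGURATION BLOCK (illustrative sketch; names other than WHISPERFILE
# and TEMPD are placeholders, not necessarily those used in wsi/wsiml)
model=/path/to/whisper.cpp/models/ggml-base.en.bin   # local whisper.cpp model file
host=127.0.0.1                                       # whisper.cpp server IP...
port=8080                                            # ...and port, if using server mode
WHISPERFILE=whisper-tiny.en.llamafile                # set only if using a whisperfile
TEMPD='/dev/shm'                                     # fast in-RAM temp storage (see Tips and Tricks)
```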

GUI SETUP OF HOTKEYS

To start and stop speech input (applies to both manual and automatic installation):

CASE 1: GNOME

##### Hotkey to start recording of speech

* Open your GNOME system settings and find "Keyboard".
* Under "Keyboard shortcuts", choose "View and customize shortcuts".
* In the new window, scroll down to "Custom Shortcuts" and press it.
* Press "+" to add a new shortcut and give it a name: "Start Recording Speech".
* In the "Command" field, type `/home/yourusername/.local/bin/wsi` to use the middle mouse button, or change it to `.../wsi -c` to use the clipboard.
* (For users of the multilingual models, replace `wsi` above with `wsiml`, and if using a whisperfile, add the `-w` flag, i.e. `/home/yourusername/.local/bin/wsi -c -w`.)
* Then press "Set Shortcut" and select an unused key combination, for example CTRL+ALT+a.
* Click Add and you are done.

The orchestrator script has a silence-detection filter in the call to sox (rec) and will, in the best case, stop recording after 2 seconds of silence. In addition, if one does not want to wait or has issues with the silence-detection threshold:

##### Manual speech recording interruption

For those who want to be able to interrupt the recording manually with a key combination, in the spirit of great hacks, we are going to use the system's built-in features:

* Open your GNOME system settings and again find "Keyboard".
* Under "Keyboard shortcuts", choose "View and customize shortcuts".
* In the new window, scroll down to "Custom Shortcuts" and press it.
* Press "+" to add a new shortcut and give it a name: "Interrupt Speech Input!"
* In the "Command" field, type `pkill --signal 2 rec`.
* Then press "Set Shortcut" and select an unused key combination, for example CTRL+ALT+x.
* Click Add and you are done. That simple.

Just make sure that the new key binding has not already been set up for something else. Now, when the script is recording speech, it can be stopped with the new key combo and transcription will start immediately.
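If you prefer to script this instead of clicking through Settings, recent GNOME versions let you create the same custom shortcut with gsettings. A rough sketch (the slot name `custom0` and the key combination are just examples, and the first command overwrites any existing custom keybinding list, so merge instead if you already have some):

```
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"

KB=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KB name 'Start Recording Speech'
gsettings set $KB command "$HOME/.local/bin/wsi -c"
gsettings set $KB binding '<Control><Alt>a'
```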
CASE 2: XFCE4

This is similar to the GNOME setup above (for reference, see its more detailed instructions).

* Open the Xfce4 Settings Manager.
* Navigate to Keyboard → Application Shortcuts.
* Click on the Add button to create a new shortcut.
* Enter the name of the shortcut and the command, e.g. `/home/yourusername/.local/bin/wsi` or `.../wsi -c` for using the clipboard.
* (For users of the multilingual models, replace `wsi` above with `wsiml`, and if using a whisperfile, add the `-w` flag, i.e. `/home/yourusername/.local/bin/wsi -c -w`.)
* Press the keys you wish to assign to the shortcut.
* Click OK to save the shortcut.

The hotkey to stop speech recording should be set up similarly with `pkill --signal 2 rec`.
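The same bindings can reportedly be created from the terminal with xfconf-query, assuming the standard xfce4-keyboard-shortcuts channel is in use (a sketch; the key combinations are examples, and `<Primary>` stands for Ctrl):

```
# Bind Ctrl+Alt+A to start speech input and Ctrl+Alt+X to interrupt the recording
xfconf-query -c xfce4-keyboard-shortcuts -n -t string \
  -p "/commands/custom/<Primary><Alt>a" -s "$HOME/.local/bin/wsi -c"
xfconf-query -c xfce4-keyboard-shortcuts -n -t string \
  -p "/commands/custom/<Primary><Alt>x" -s "pkill --signal 2 rec"
```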
CASE 3: KDE (Plasma)

This is similar to the GNOME setup above (for reference, see its more detailed instructions).

* Open the System Settings application.
* Navigate to Shortcuts and then Custom Shortcuts.
* Click on Edit and then New to create a new group for your shortcuts if needed.
* Under the newly created group, click on New again and select Global Shortcut -> Command/URL.
* Give your new shortcut a name.
* Choose the desired shortcut key combination by clicking on the button next to "None" and pressing the keys you want to assign to the shortcut.
* In the Trigger tab, specify the command to be executed when the shortcut is triggered, e.g. `/home/yourusername/.local/bin/wsi` or `.../wsi -c`.
* (For users of the multilingual models, replace `wsi` above with `wsiml`, and if using a whisperfile, add the `-w` flag, i.e. `/home/yourusername/.local/bin/wsi -c -w`.)
* Ensure that the Enabled checkbox is checked to activate the shortcut.
* Apply the changes by clicking Apply or OK.

The hotkey to stop speech recording should be set up similarly with `pkill --signal 2 rec`.

Please note that there may be slight variations in the above steps depending on the version installed on your system. For many other environments, such as MATE, Cinnamon, LXQt, Deepin, etc., the steps should be broadly similar to the examples above. Please consult the documentation for your system's desktop environment.

TO DO

SUMMARY

TIPS AND TRICKS

Sox records in wav format at a 16 kHz rate, the only format currently accepted by whisper.cpp. This is done in **wsi** with this command: `rec -t wav $ramf rate 16k silence 1 0.1 3% 1 2.0 6%`. It will attempt to stop on 2 s of silence with a signal-level threshold of 6%. A very noisy environment will prevent the detection of silence and the recording (of noise) will continue. This is a problem, and a remedy that may not work in all cases is to adjust the duration and silence threshold in the sox filter in the `wsi` script. Of course, one can use the manual interruption method if preferred. We can't raise the threshold arbitrarily because, if one consistently lowers their voice (fadeout) at the end of speech, it may get cut off when the threshold is high; lower it in that case to a few %. It is best to make the speech distinguishable from noise by amplitude (speak clearly, close to the microphone) while minimizing external noise (sheltered location of the microphone, noise-canceling hardware, etc.). With a good speech signal level, the threshold can then be more effective, since the SNR (speech-to-noise ratio :-) is effectively increased.

After the speech is captured, it is passed to `transcribe` (whisper.cpp) for speech recognition. This happens faster than real time (especially with a fast CPU or if your whisper.cpp installation uses CUDA). One can adjust the number of processing threads used by adding `-t n` to the command-line parameters of transcribe (please see the whisper.cpp documentation). The script then parses the text to remove non-speech artifacts, formats it and sends it to the PRIMARY selection (clipboard) using either X11 or Wayland tools.

In principle, whisper (whisper.cpp) **is multilingual** and, with the correct model file, this application will output UTF-8 text transcribed in the correct language. The `wsiml` script is dedicated to multilingual use: with it the user can choose the language for speech input (using the `-l LC` flag, where LC is the language code) and can also translate the speech in the chosen input language to English with the `-t` flag. The user can assign multiple hotkeys to the various languages that they want to transcribe or translate from. For example, two additional hotkeys can be set, one for transcribing and another for translating from French, by assigning the commands `wsiml -l fr` and `wsiml -l fr -t` correspondingly.

Please note that when using server mode, you now have two choices: you can have either the precompiled whisper.cpp server or the downloaded whisperfile (in server mode) listen at the preconfigured host and port number. The orchestrator script approaches them the same way (see the example at the end of this section).

##### Temporary directory and files

Speech-to-text transcription is a memory- and CPU-intensive task, and fast storage for read and write access can only help. That is why **wsi** stores temporary and resource files in memory, for speed and to reduce SSD/HDD "grinding": `TEMPD='/dev/shm'`. This mount point of type "tmpfs" is created in RAM (let's assume that you have enough, say at least 8 GB) and is made available by the kernel for user-space applications. When the computer is shut down it is automatically wiped, which is fine since we do not need the intermediate files. In fact, for some types of applications (looking at you, Electron), it would be beneficial (IMHO) to have the systemwide /tmp mount point also kept in RAM. Moving /tmp to RAM may speed up application startup a bit; a welcome speedup for any Electron app.
In its simplest form, this transition is easy; just run `echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab` and then restart your Linux computer. For the aforementioned reasons, especially if an HDD is the main storage medium, one can also move the ASR model files needed by whisper.cpp to the same location (/dev/shm). These are large files that can be transferred there at the start of a terminal session (or at system startup). This can be done from your `.profile` file by placing something like this in it:

```
([ -f /dev/shm/ggml-base.en.bin ] || cp /path/to/your/local/whisper.cpp/models/ggml* /dev/shm/)
```

https://github.com/QuantiusBenignus/cliblurt/assets/120202899/e4cd3e39-6dd3-421b-9550-4c428a5a8f0a
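As mentioned above, the orchestrator script can also talk to a transcription server instead of a local binary. A minimal sketch of launching the precompiled whisper.cpp server (the model path, host and port are examples and must match what is configured in `wsi`/`wsiml`; depending on the whisper.cpp version, the binary may be named `server` or `whisper-server`):

```
# from the whisper.cpp build directory
./server -m models/ggml-base.en.bin --host 127.0.0.1 --port 8080
```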

Credits