
# Audiotext

A desktop application that transcribes audio from files, microphone input or YouTube videos with the option to translate the content and create subtitles.



## About the Project

*Main window screenshot*

Audiotext transcribes the audio from an audio file, video file, microphone input, directory, or YouTube video into any of the 99 different languages it supports. You can transcribe using the Google Speech-to-Text API, the Whisper API, or WhisperX. The last two methods can even translate the transcription or generate subtitles!

You can also choose the theme you like best. It can be dark, light, or the one configured in the system.

*Dark theme*
*Light theme*

### Supported Languages

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Bashkir, Basque, Belarusian, Bengali, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese, Chinese (Yue), Croatian, Czech, Danish, Dutch, English, Estonian, Faroese, Farsi, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian, Hausa, Hawaiian, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latin, Latvian, Lingala, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Norwegian Nynorsk, Occitan, Pashto, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tagalog, Tajik, Tamil, Tatar, Telugu, Thai, Tibetan, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Yiddish, Yoruba.

### Supported File Types

**Audio file formats**: `.aac`, `.flac`, `.mp3`, `.mpeg`, `.oga`, `.ogg`, `.opus`, `.wav`, `.wma`

**Video file formats**: `.3g2`, `.3gp2`, `.3gp`, `.3gpp2`, `.3gpp`, `.asf`, `.avi`, `.f4a`, `.f4b`, `.f4v`, `.flv`, `.m4a`, `.m4b`, `.m4r`, `.m4v`, `.mkv`, `.mov`, `.mp4`, `.ogv`, `.ogx`, `.webm`, `.wmv`
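Both audio and video files are processed with FFmpeg (see the Installation section below). As a rough illustration of the decoding step, the following sketch extracts a speech-friendly WAV track from a video file. It assumes `ffmpeg` is available on your `PATH` and is not Audiotext's actual internal code:

```python
import subprocess

def extract_audio(video_path: str, wav_path: str) -> None:
    """Extract a mono 16 kHz WAV track from a video file using FFmpeg."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,  # input video, e.g. interview.mp4
            "-vn",             # drop the video stream
            "-ac", "1",        # downmix to mono
            "-ar", "16000",    # 16 kHz sample rate, common for speech models
            wav_path,
        ],
        check=True,  # raise CalledProcessError if FFmpeg fails
    )

extract_audio("interview.mp4", "interview.wav")
```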

### Project Structure

ASCII folder structure:

```
│   .gitignore
│   audiotext.spec
│   LICENSE
│   README.md
│   requirements.txt
│
├───.github
│   │   CONTRIBUTING.md
│   │   FUNDING.yml
│   │
│   ├───ISSUE_TEMPLATE
│   │       bug_report_template.md
│   │       feature_request_template.md
│   │
│   └───PULL_REQUEST_TEMPLATE
│           pull_request_template.md
│
├───docs/
│
├───res
│   ├───img
│   │       icon.ico
│   │
│   └───locales
│       │   main_controller.pot
│       │   main_window.pot
│       │
│       ├───en
│       │   └───LC_MESSAGES
│       │           app.mo
│       │           app.po
│       │           main_controller.po
│       │           main_window.po
│       │
│       └───es
│           └───LC_MESSAGES
│                   app.mo
│                   app.po
│                   main_controller.po
│                   main_window.po
│
└───src
    │   app.py
    │
    ├───controllers
    │       __init__.py
    │       main_controller.py
    │
    ├───handlers
    │       file_handler.py
    │       google_api_handler.py
    │       openai_api_handler.py
    │       whisperx_handler.py
    │       youtube_handler.py
    │
    ├───interfaces
    │       transcribable.py
    │
    ├───models
    │   │   __init__.py
    │   │   transcription.py
    │   │
    │   └───config
    │           __init__.py
    │           config_subtitles.py
    │           config_system.py
    │           config_transcription.py
    │           config_whisper_api.py
    │           config_whisperx.py
    │
    ├───utils
    │       __init__.py
    │       audio_utils.py
    │       config_manager.py
    │       constants.py
    │       dict_utils.py
    │       enums.py
    │       env_keys.py
    │       path_helper.py
    │
    └───views
        │   __init__.py
        │   main_window.py
        │
        └───custom_widgets
                __init__.py
                ctk_scrollable_dropdown/
                ctk_input_dialog.py
```

### Built With

(back to top)

## Getting Started

### Installation

  1. Install FFmpeg so the program can run; without it, Audiotext can't process audio files.

    To check if you have it installed on your system, run `ffmpeg -version`. It should return something similar to this:

    ffmpeg version 5.1.2-essentials_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
    built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
    configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
    libavutil      57. 28.100 / 57. 28.100
    libavcodec     59. 37.100 / 59. 37.100
    libavformat    59. 27.100 / 59. 27.100
    libavdevice    59.  7.100 / 59.  7.100
    libavfilter     8. 44.100 /  8. 44.100
    libswscale      6.  7.100 /  6.  7.100
    libswresample   4.  7.100 /  4.  7.100

    If the command returns an error, your system can't find the `ffmpeg` executable, most likely because it isn't installed. To install FFmpeg, open a command prompt and run one of the following commands, depending on your operating system:

    # on Ubuntu or Debian
    sudo apt update && sudo apt install ffmpeg
    
    # on Arch Linux
    sudo pacman -S ffmpeg
    
    # on macOS using Homebrew (https://brew.sh/)
    brew install ffmpeg
    
    # on Windows using Chocolatey (https://chocolatey.org/)
    choco install ffmpeg
    
    # on Windows using Scoop (https://scoop.sh/)
    scoop install ffmpeg
  2. Go to the [releases](https://github.com/HenestrosaDev/audiotext/releases) page and download the latest version.
  3. Decompress the downloaded file.
  4. Open the audiotext folder and double-click the Audiotext executable file.

### Setting Up the Project Locally

  1. Clone the repository by running `git clone https://github.com/HenestrosaDev/audiotext.git`.
  2. Change the current working directory to `audiotext` by running `cd audiotext`.
  3. (Optional but recommended) Create a Python virtual environment in the project root. If you're using `virtualenv`, run `virtualenv venv`.
  4. (Optional but recommended) Activate the virtual environment:

    # on Windows
    . venv/Scripts/activate
    # if you get the error `FullyQualifiedErrorId : UnauthorizedAccess`, run this:
    Set-ExecutionPolicy Unrestricted -Scope Process
    # and then . venv/Scripts/activate
    
    # on macOS and Linux
    source venv/bin/activate
  5. Run `pip install -r requirements.txt` to install the dependencies.
  6. (Optional) If you intend to contribute to the project, run `pip install -r requirements-dev.txt` to install the development dependencies.
  7. (Optional) If you followed step 6, run `pre-commit install` to install the pre-commit hooks in your `.git/` directory.
  8. Duplicate the `.env.example` file and rename the copy to `.env`, keeping it in the root of the directory.
  9. Run `python src/app.py` to start the program.

### Notes

(back to top)

## Usage

Once you open the Audiotext executable file (explained in the Getting Started section), you'll see something like this:

*Main window screenshot*

### Transcription Language

The target language of the transcription. If you use the **Whisper API** or **WhisperX** transcription methods, you can set this to a language other than the one spoken in the audio to translate the transcription into the selected language.

For example, to translate English audio into French, you would set `Transcription language` to French, as shown in the video below:

https://github.com/user-attachments/assets/e68d9b90-3978-4ffb-9b62-bd3d57a1a33d

This is an unofficial way to perform translations, so be sure to double-check the generated transcription for errors.

### Transcription Method

There are three transcription methods available in **Audiotext**:

- **Google Speech-to-Text API**: remote transcription through Google's API. The free tier covers up to 60 minutes of audio per month (see [Google Speech-To-Text API Options](#google-speech-to-text-api-options)).
- **Whisper API**: remote transcription through OpenAI's hosted Whisper model. Requires an OpenAI API key (see [Whisper API Options](#whisper-api-options)).
- **WhisperX**: local transcription, free and unlimited, with support for translation and subtitle generation (see [WhisperX Options](#whisperx-options)).

### Audio Source

You can transcribe from four different audio sources:

- **File**: an audio or video file (see [Supported File Types](#supported-file-types)).
- **Directory**: all supported files in a selected folder.
- **Microphone**: live microphone input.
- **YouTube**: the audio track of a YouTube video, which first has to be downloaded, as sketched below.
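For illustration, here is a minimal sketch of how a YouTube audio track can be fetched with the `yt-dlp` package. The package choice is an assumption for this example, not necessarily the library Audiotext uses internally:

```python
from yt_dlp import YoutubeDL

def download_youtube_audio(url: str, out_stem: str = "audio") -> None:
    """Download the best available audio track of a YouTube video."""
    options = {
        "format": "bestaudio/best",        # prefer audio-only streams
        "outtmpl": f"{out_stem}.%(ext)s",  # e.g. audio.webm or audio.m4a
    }
    with YoutubeDL(options) as ydl:
        ydl.download([url])

download_youtube_audio("https://www.youtube.com/watch?v=VIDEO_ID")  # placeholder URL
```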

### Save Transcription

When you click the `Save transcription` button, a file explorer opens where you can name the transcription file and choose where to save it. Please note that any text entered or modified in the textbox WILL NOT be included in the saved transcription.

#### Autosave

Unchecked by default. If checked, the transcription is automatically saved in the root of the folder where the file to transcribe is stored. Existing files with the same name won't be overwritten; to overwrite them, you'll need to check the `Overwrite existing files` option (see below).

Note that if you create a transcription using the **Microphone** or **YouTube** audio sources with the `Autosave` option enabled, the transcription files will be saved in the root of the `audiotext-vX.X.X` directory.

#### Overwrite Existing Files

This option can only be checked if the Autosave option is checked. If Overwrite existing files is checked, existing transcriptions in the root directory of the file to be transcribed will be overwritten when saving.

For example, let's use this directory as a reference:

```
└───audios
        foo.mp3
        foo.srt
        foo.txt
```

If we transcribe the audio file `foo.mp3` with the output file types `.json`, `.txt`, and `.srt` and both the `Autosave` and `Overwrite existing files` options checked, the files `foo.srt` and `foo.txt` will be overwritten and the file `foo.json` will be created.

On the other hand, if we transcribe the same audio file with the same output file types and `Autosave` checked but `Overwrite existing files` unchecked, the file `foo.json` will still be created, but `foo.srt` and `foo.txt` will remain unchanged.

### Google Speech-To-Text API Options

The `Google API options` frame appears if the selected transcription method is **Google API**. See the [Transcription Method](#transcription-method) section to learn more about the **Google API**.

*Google API options*

#### Google API Key

Since the program uses the free **Google API** tier by default, which allows you to transcribe up to 60 minutes of audio per month for free, you may need to add an API key if you want to make extensive use of this feature. To do so, click the `Set API key` button. You'll be presented with a dialog box where you can enter your API key, which will **only** be used to make requests to the API.

*Google API key dialog*
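For context, the snippet below shows roughly what a keyed request to the Google Speech-to-Text API looks like when made with the `SpeechRecognition` Python package. It is a minimal sketch for illustration; the package choice and `YOUR_API_KEY` are assumptions, not Audiotext's exact implementation:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a local audio file and read its entire contents
with sr.AudioFile("speech.wav") as source:
    audio = recognizer.record(source)

# Without `key`, the library falls back to a shared default key with tight limits
text = recognizer.recognize_google(audio, key="YOUR_API_KEY", language="en-US")
print(text)
```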

Remember that **WhisperX** provides fast, unlimited audio transcription that supports translation and subtitle generation for free, unlike the **Google API**. Also note that Google charges for the use of the API key, for which **Audiotext** is not responsible.

### Whisper API Options

The `Whisper API options` frame appears if the selected transcription method is **Whisper API**. See the [Transcription Method](#transcription-method) section to learn more about the **Whisper API**.

*Whisper API options*

#### Whisper API Key

As noted in the [Transcription Method](#transcription-method) section, an [OpenAI API key](https://platform.openai.com/api-keys) is required to use this transcription method. To add it, click the `Set OpenAI API key` button. You'll be presented with a dialog box where you can enter your API key, which will **only** be used to make requests to the API.

*OpenAI API key dialog*

OpenAI charges for the use of the API key, for which **Audiotext** is not responsible. See the [Troubleshooting](#troubleshooting) section if you get error `429` on your first request with an API key.

#### Response Format

The format of the transcript output, one of the following options:

- `json`
- `srt` (subtitle file type)
- `text`
- `verbose_json`
- `vtt` (subtitle file type)

Defaults to `text`.

#### Temperature

The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. Defaults to 0.

#### Timestamp Granularities

The timestamp granularities to populate for this transcription. `Response format` must be set to `verbose_json` to use timestamp granularities. Either or both of `word` and `segment` are supported.

**Note**: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.

Defaults to `segment`.
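These options correspond to parameters of OpenAI's audio transcription endpoint. Below is a minimal sketch using the official `openai` Python package; the file name is a placeholder, and the API key is assumed to be in the `OPENAI_API_KEY` environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        response_format="verbose_json",       # required for timestamp granularities
        temperature=0,                        # deterministic output
        timestamp_granularities=["segment"],  # or ["word"], or both
    )

for segment in transcript.segments:
    print(segment.start, segment.end, segment.text)
```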

### WhisperX Options

The **WhisperX** options appear when the selected transcription method is **WhisperX**. You can select the output file types of the transcription and whether to translate the transcription into English.

*WhisperX options*

#### Output File Types

You can select one or more of the following transcription output file types:

- `.aud`
- `.json`
- `.srt` (subtitle file type)
- `.tsv`
- `.txt`
- `.vtt` (subtitle file type)

If you select one of the two subtitle file types (`.vtt` and `.srt`), the `Subtitle options` frame will be displayed with more options (read more [here](#subtitle-options)).

#### Translate to English

To translate the transcription to English, simply check the `Translate to English` checkbox before generating the transcription, as shown in the video below.

https://github.com/user-attachments/assets/e614201c-25f2-4ec7-8478-3b63aade0c44

If you want to translate the audio to another language, check the [Transcription Language](#transcription-language) section.

### Subtitle Options

When you select the `.srt` and/or the `.vtt` output file type(s), the `Subtitle options` frame will be displayed. Note that the following options only apply to the `.srt` and `.vtt` files:

*Subtitle options*

To get the subtitle file(s) after the audio is transcribed, you can either check the `Autosave` option before generating the transcription or click `Save transcription` and select the path where you want to save them, as explained in the [Save Transcription](#save-transcription) section.

#### Highlight Words

Underline each word as it's spoken in `.srt` and `.vtt` subtitle files. Unchecked by default.

#### Max. Line Count

The maximum number of lines in a segment. `2` by default.

#### Max. Line Width

The maximum number of characters in a line before breaking the line. `42` by default.
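For instance, with the default `Max. line count` of `2` and `Max. line width` of `42`, a cue in the generated `.srt` file would look something like this (hypothetical timing and text):

```
1
00:00:01,000 --> 00:00:04,500
This is an example of a subtitle segment
wrapped at forty-two characters per line.
```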

### Advanced Options

When you click the `Show advanced options` button in the `WhisperX options` frame, the `Advanced options` frame appears, as shown in the figure below.

*WhisperX advanced options*

It's highly recommended that you don't change the default configuration unless you're having problems with **WhisperX** or you know exactly what you're doing, especially the `Compute type` and `Batch size` options. Change them at your own risk and be aware that you may experience problems, such as having to reboot your system if the GPU runs out of VRAM.

#### Model Size

There are five main ASR (Automatic Speech Recognition) model sizes that offer tradeoffs between speed and accuracy. The larger the model size, the more VRAM it uses and the longer it takes to transcribe. Unfortunately, **WhisperX** hasn't provided specific performance data for each model, so the table below is based on the one detailed in [OpenAI's Whisper README](https://github.com/openai/whisper). According to **WhisperX**, the `large-v2` model requires <8 GB of GPU memory and batches inference for 70x real-time transcription (taken from the project's [README](https://github.com/m-bain/whisperX)).

| Model    | Parameters | Required VRAM |
|:--------:|:----------:|:-------------:|
| `tiny`   | 39 M       | ~1 GB         |
| `base`   | 74 M       | ~1 GB         |
| `small`  | 244 M      | ~2 GB         |
| `medium` | 769 M      | ~5 GB         |
| `large`  | 1550 M     | <8 GB         |

> [!NOTE]
> `large` is divided into three versions: `large-v1`, `large-v2`, and `large-v3`.

The default model size is `large-v2`, since `large-v3` has some bugs that weren't as common in `large-v2`, such as hallucination and repetition, especially for certain languages like Japanese, as well as more prevalent problems with missing punctuation and capitalization. See the announcements for the [`large-v2`](https://github.com/openai/whisper/discussions/661) and [`large-v3`](https://github.com/openai/whisper/discussions/1762) models for more insight into their differences and the issues encountered with each.

The larger the model size, the lower the WER (Word Error Rate, in %). The table below is taken from [this Medium article](https://blog.ml6.eu/fine-tuning-whisper-for-dutch-language-the-crucial-role-of-size-dd5a7012d45f), which analyzes the performance of pre-trained Whisper models on common Dutch speech.

| Model      | WER   |
|:----------:|:-----:|
| `tiny`     | 50.98 |
| `small`    | 17.90 |
| `large-v2` | 7.81  |

#### Compute Type

This term refers to different data types used in computing, particularly in the context of numerical representation. It determines how numbers are stored and represented in a computer's memory. The higher the precision, the more resources are needed and the better the transcription is. There are three possible values in **Audiotext**:

- `int8`: Default if using CPU. It represents whole numbers without any fractional part. Its size is 8 bits (1 byte), and it can represent integer values from -128 to 127 (signed) or 0 to 255 (unsigned). It is used in scenarios where memory efficiency is critical, such as in quantized neural networks or edge devices with limited computational resources.
- `float16`: Default if using a CUDA GPU. It's a half-precision type representing 16-bit floating-point numbers. Its size is 16 bits (2 bytes). It has a smaller range and precision compared to `float32`. It's often used in applications where memory is a critical resource, such as in deep learning models running on GPUs or TPUs.
- `float32`: Recommended for CUDA GPUs with more than 8 GB of VRAM. It's a single-precision type representing 32-bit floating-point numbers, which is a standard for representing real numbers in computers. Its size is 32 bits (4 bytes). It can represent a wide range of real numbers with a reasonable level of precision.

#### Batch Size

This option determines how many samples are processed together before the model parameters are updated. It doesn't affect the quality of the transcription, only the generation speed (the smaller, the slower). For simplicity, let's divide the possible batch size values into two groups:

- **Small batch size (≤8)**: Uses less memory (reducing it, e.g. to `4`, helps if the GPU runs out of VRAM), at the cost of slower generation.
- **Large batch size (>8)**: Speeds up processing, especially on hardware optimized for parallel processing such as GPUs. Max. recommended: `16`.
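These options map onto WhisperX's Python API roughly as shown below, in a minimal sketch adapted from the [WhisperX README](https://github.com/m-bain/whisperX); the file path is a placeholder, and the values match the defaults described above:

```python
import whisperx

device = "cuda"           # or "cpu" (see "Use CPU" below)
compute_type = "float16"  # default on CUDA GPUs; "int8" on CPU
batch_size = 16           # reduce (e.g. to 4) if the GPU runs out of VRAM

# Load the default model size used by Audiotext
model = whisperx.load_model("large-v2", device, compute_type=compute_type)

audio = whisperx.load_audio("speech.mp3")
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"])
```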
#### Use CPU

**WhisperX** will use the CPU for transcription if checked. Checked by default if there is no CUDA GPU. As noted in the [Compute Type](#compute-type) section, the default compute type value for the CPU is `int8`, since many CPUs don't support efficient `float16` or `float32` computation, which would result in an error. Change it at your own risk.

## Troubleshooting

### The program is unresponsive when using WhisperX

The first transcription created by **WhisperX** will take longer than subsequent ones, because **Audiotext** needs to load the model, which can take a few minutes depending on the hardware the program is running on. The program may appear to be unresponsive, but do not close it; it will eventually return to a normal state. Once the model is loaded, you'll notice a dramatic increase in the speed of subsequent transcriptions using this method.

### I get the error `RuntimeError: CUDA Out of memory` when using WhisperX

Try any of the following (2 and 3 can affect quality), taken from the [WhisperX README](https://github.com/m-bain/whisperX#technical-details-%EF%B8%8F):

1. Reduce the batch size, e.g. to `4`
2. Use a smaller ASR model, e.g. `base`
3. Use a lighter compute type, e.g. `int8`

### Is it possible to reduce the GPU/CPU memory requirements when using WhisperX?

You can follow the steps above. See the [Model Size](#model-size) section for how much memory each model needs.

### The program takes _too_ much time to generate a transcription

Try using a smaller ASR model and/or a lighter compute type, as indicated in the point above. Keep in mind that the first **WhisperX** transcription will take some time to load the model. Also remember that the transcription process depends heavily on your system's hardware, so don't expect instant results on modest CPUs. Alternatively, you can use the **Whisper API** or **Google API** transcription methods, which are much less hardware intensive than **WhisperX** because the transcriptions are generated remotely, but you'll be dependent on the speed of your Internet connection.

### When I try to generate a transcription using the Whisper API method, I get the error `429`

You'll be prompted with an error like this:

```
RateLimitError("Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}")
```

This happens either because your account has run out of credits or because you need to fund your account before you can use the API for the first time (even if you have free credits available). To fix this, purchase credits for your account (starting at $5) with a credit or debit card in the [Billing](https://platform.openai.com/settings/organization/billing/overview) section of your OpenAI account settings.
After funds are added to your account, it may take up to 10 minutes for your account to become active. If you are using an API key that was created before you funded your account for the first time, and the error still persists after about 10 minutes, you'll need to create a new API key and change it in **Audiotext** (see the [Whisper API Key](#whisper-api-key) section).

(back to top)

## Roadmap

See the [project backlog](https://github.com/users/HenestrosaDev/projects/1). You can propose a new feature by creating a [discussion](https://github.com/HenestrosaDev/audiotext/discussions/new?category=ideas)!

## Authors

- HenestrosaDev (José Carlos López Henestrosa)

See also the list of [contributors](https://github.com/HenestrosaDev/audiotext/contributors) who participated in this project.

## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. Please read the [CONTRIBUTING.md](https://github.com/HenestrosaDev/audiotext/blob/main/.github/CONTRIBUTING.md) file, where you can find more detailed information about how to contribute to the project.

## Acknowledgments

I used the following resources to create this project:

- [Extracting speech from video using Python](https://towardsdatascience.com/extracting-speech-from-video-using-python-f0ec7e312d38)
- [How to translate Python applications with the GNU gettext module](https://phrase.com/blog/posts/translate-python-gnu-gettext/)
- [Speech recognition on large audio files](https://www.geeksforgeeks.org/python-speech-recognition-on-large-audio-files/)

## License

Distributed under the BSD-4-Clause license. See [`LICENSE`](https://github.com/HenestrosaDev/audiotext/blob/main/LICENSE) for more information.

## Support

Would you like to support the project? That's very kind of you! However, I would suggest that you consider supporting the packages I've used to build this project first. If you still want to support this particular project, you can go to my Ko-fi profile by clicking on the button below!

[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/henestrosadev)

(back to top)