Run transcriptions using the OpenAI Whisper API
Install this plugin in the same environment as LLM.
llm install llm-whisper-api
The plugin adds a new command, llm whisper-api. Use it like this:
llm whisper-api audio.mp3
The transcript will be written to standard output as plain text.
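Because the transcript goes to standard output, you can capture it with ordinary shell redirection. A minimal sketch (the transcript.txt filename is just an example):

```shell
# Write the transcript to a file while still showing it in the terminal
llm whisper-api audio.mp3 | tee transcript.txt
```

Plain `> transcript.txt` works too if you don't need to see the output as it arrives.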
The plugin will use the OpenAI API key you have already configured using:
llm keys set openai
# Paste key here
You can also pass an explicit API key using --key, like this:
llm whisper-api audio.mp3 --key $OPENAI_API_KEY
You can pipe data to the tool if you specify - as the filename:
curl -s 'https://static.simonwillison.net/static/2024/russian-pelican-in-spanish.mp3' \
| llm whisper-api -
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-whisper-api
python -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
To run the tests:
python -m pytest
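While iterating, it can be handy to run only a subset of the tests. pytest's -k option filters by test-name substring; the pattern shown here is just an example, not a test name taken from this repository:

```shell
# Run only tests whose names contain the given substring
python -m pytest -k api
```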