Official Python SDK for Deepgram. Power your apps with world-class speech and Language AI models.
You can learn more about the Deepgram API at developers.deepgram.com.
🔑 To access the Deepgram API you will need a free Deepgram API Key.
Python (version ^3.10)
To install the latest available version (which will change over time):
pip install deepgram-sdk
If you are going to write an application to consume this SDK, it's highly recommended (and a programming staple) to pin to at least a major version of the SDK (i.e. ==2.*) or, with due diligence, to a minor and/or specific version (i.e. ==2.1.* or ==2.12.0, respectively). If you are unfamiliar with semantic versioning (semver), it's a must-read.
In a requirements.txt file, pinning to a major (or minor) version, for example if you want to stay on the 2.x series that includes the v2.12.0 release, can be done like this:
deepgram-sdk==2.*
Or using pip:
pip install deepgram-sdk==2.*
Pinning to a specific version can be done like this in a requirements.txt
file:
deepgram-sdk==2.12.0
Or using pip:
pip install deepgram-sdk==2.12.0
We guarantee that major interfaces will not break within a given major semver release (i.e. 2.*). However, all bets are off moving from a 2.* to a 3.* major release. This follows standard semver best practices.
This SDK aims to reduce complexity and abstract/hide some internal Deepgram details that clients shouldn't need to know about. However, you can still tweak options and settings if you need to.
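For example, you can pass a DeepgramClientOptions object when constructing the client to adjust client-level behavior. The snippet below is a minimal sketch: the keepalive option shown is only an illustration of the kind of setting you might tweak, not something this README prescribes.

from deepgram import DeepgramClient, DeepgramClientOptions

# Minimal sketch: tweak client-level settings instead of relying on defaults.
# The "keepalive" option here is illustrative only.
config: DeepgramClientOptions = DeepgramClientOptions(
    options={"keepalive": "true"},
)
deepgram: DeepgramClient = DeepgramClient("", config)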
You can find a walkthrough on our documentation site. Transcribing Pre-Recorded Audio can be done using the following sample code:
from deepgram import (
    DeepgramClient,
    ClientOptionsFromEnv,
    PrerecordedOptions,
)

AUDIO_URL = {
    "url": "https://static.deepgram.com/examples/Bueller-Life-moves-pretty-fast.wav"
}

# STEP 1 Create a Deepgram client using the API key from environment variables
deepgram: DeepgramClient = DeepgramClient("", ClientOptionsFromEnv())

# STEP 2 Call the transcribe_url method on the prerecorded class
options: PrerecordedOptions = PrerecordedOptions(
    model="nova-2",
    smart_format=True,
)
response = deepgram.listen.rest.v("1").transcribe_url(AUDIO_URL, options)
print(f"response: {response}\n\n")
You can find a walkthrough on our documentation site. Transcribing Live Audio can be done using the following sample code:
from deepgram import (
    DeepgramClient,
    LiveTranscriptionEvents,
    LiveOptions,
    Microphone,
)

deepgram: DeepgramClient = DeepgramClient()
dg_connection = deepgram.listen.websocket.v("1")
def on_open(self, open, **kwargs):
    print(f"\n\n{open}\n\n")

def on_message(self, result, **kwargs):
    sentence = result.channel.alternatives[0].transcript
    if len(sentence) == 0:
        return
    print(f"speaker: {sentence}")

def on_metadata(self, metadata, **kwargs):
    print(f"\n\n{metadata}\n\n")

def on_speech_started(self, speech_started, **kwargs):
    print(f"\n\n{speech_started}\n\n")

def on_utterance_end(self, utterance_end, **kwargs):
    print(f"\n\n{utterance_end}\n\n")

def on_error(self, error, **kwargs):
    print(f"\n\n{error}\n\n")

def on_close(self, close, **kwargs):
    print(f"\n\n{close}\n\n")
dg_connection.on(LiveTranscriptionEvents.Open, on_open)
dg_connection.on(LiveTranscriptionEvents.Transcript, on_message)
dg_connection.on(LiveTranscriptionEvents.Metadata, on_metadata)
dg_connection.on(LiveTranscriptionEvents.SpeechStarted, on_speech_started)
dg_connection.on(LiveTranscriptionEvents.UtteranceEnd, on_utterance_end)
dg_connection.on(LiveTranscriptionEvents.Error, on_error)
dg_connection.on(LiveTranscriptionEvents.Close, on_close)
options: LiveOptions = LiveOptions(
    model="nova-2",
    punctuate=True,
    language="en-US",
    encoding="linear16",
    channels=1,
    sample_rate=16000,
    # To get UtteranceEnd, the following must be set:
    interim_results=True,
    utterance_end_ms="1000",
    vad_events=True,
)
dg_connection.start(options)
# create microphone
microphone = Microphone(dg_connection.send)

# start microphone
microphone.start()

# wait until finished
input("Press Enter to stop recording...\n\n")

# Wait for the microphone to close
microphone.finish()

# Indicate that we've finished
dg_connection.finish()

print("Finished")
There are examples for every API call in this SDK. You can find all of these examples in the examples folder at the root of this repo.
Before running any of these examples, you will need to take a look at the README and install the following dependencies:
pip install -r examples/requirements-examples.txt
Text to Speech:
Analyze Text:
PreRecorded Audio:
Live Audio Transcription:
Management API examples exercise the full CRUD operations for:
To run each example, set the DEEPGRAM_API_KEY as an environment variable, then cd into each example folder and execute the example: python main.py.
This SDK provides logging as a means to troubleshoot and debug issues encountered. By default, this SDK will enable Information level messages and higher (i.e. Warning, Error, etc.) when you initialize the library as follows:
deepgram: DeepgramClient = DeepgramClient()
To increase the logging output/verbosity for debug or troubleshooting purposes, you can set the DEBUG level using this code:
import logging
from deepgram import DeepgramClient, DeepgramClientOptions

config: DeepgramClientOptions = DeepgramClientOptions(
    verbose=logging.DEBUG,
)
deepgram: DeepgramClient = DeepgramClient("", config)
Older SDK versions will receive Priority 1 (P1) bug support only. Security issues, both in our code and dependencies, are promptly addressed. Significant bugs without clear workarounds are also given priority attention.
Interested in contributing? We ❤️ pull requests!
To make sure our community is safe for all, be sure to review and agree to our Code of Conduct. Then see the Contribution guidelines for more information.
In order to develop new features for the SDK itself, you first need to uninstall any previous installation of the deepgram-sdk, install the dependencies contained in requirements.txt, and then instruct Python (via pip) to use the SDK by installing it locally.
From the root of the repo, that would entail:
pip uninstall deepgram-sdk
pip install -r requirements.txt
pip install -e .
If you are looking to use, run, contribute to, or modify the daily/unit tests, then you need to install the following dependencies:
pip install -r requirements-dev.txt
The daily tests invoke a series of checks against the actual/real API endpoint and save the results in the tests/response_data
folder. This response data is updated nightly to reflect the latest response from the server. Running the daily tests does require a DEEPGRAM_API_KEY
set in your environment variables.
To run the Daily Tests:
make daily-test
The unit tests invoke a series of checks against mock endpoints using the responses saved in tests/response_data from the daily tests. These tests are meant to simulate running against the endpoint without actually reaching out to it; running the unit tests still requires a DEEPGRAM_API_KEY set in your environment variables, but no requests are sent to the server.
make unit-test
We love to hear from you, so if you have questions or comments, or find a bug in the project, let us know! You can either: