Unifies access to multiple open source text to speech systems and voices for many languages.
Supports a subset of SSML that can use multiple voices, text to speech systems, and languages!
<speak>
  The 1st thing to remember is that 27 languages are supported in Open TTS as of 10/13/2021 at 3pm.

  <voice name="glow-speak:en-us_mary_ann">
    <s>
      The current voice can be changed, even to a different text to speech system!
    </s>
  </voice>

  <voice name="coqui-tts:en_vctk#p228">
    <s>Breaks are possible</s>
    <break time="0.5s" />
    <s>between sentences.</s>
  </voice>

  <s lang="en">
    One language is never enough
  </s>

  <s lang="de">
    Eine Sprache ist niemals genug
  </s>

  <s lang="ja">
    言語を一つは決して足りない
  </s>

  <s lang="sw">
    Lugha moja haitoshi
  </s>
</speak>
See the full SSML example (use the synesthesiam/opentts:all Docker image with all voices included).
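For instance, the all-voices image can be started the same way as the language-specific images shown next:
$ docker run -it -p 5500:5500 synesthesiam/opentts:all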
Basic OpenTTS server:
$ docker run -it -p 5500:5500 synesthesiam/opentts:<LANGUAGE>
where <LANGUAGE> is one of the supported language codes (for example, en).
Visit http://localhost:5500
For the HTTP API test page, visit http://localhost:5500/openapi/
Exclude eSpeak (robotic voices):
$ docker run -it -p 5500:5500 synesthesiam/opentts:<LANGUAGE> --no-espeak
You can have the OpenTTS server cache WAV files with --cache:
$ docker run -it -p 5500:5500 synesthesiam/opentts:<LANGUAGE> --cache
This will store WAV files in a temporary directory (inside the Docker container). A specific directory can also be used:
$ docker run -it -v /path/to/cache:/cache -p 5500:5500 synesthesiam/opentts:<LANGUAGE> --cache /cache
For the full HTTP API specification, see swagger.yaml.
GET /api/tts
- ?voice - voice in the form tts:voice (e.g., espeak:en)
- ?text - text to speak
- ?cache - disable WAV cache with false
- Returns audio/wav bytes

GET /api/voices
Returns a JSON object of available voices, keyed by tts:voice, where each entry has:
- id - voice identifier for TTS system (string)
- name - friendly name of voice (string)
- gender - M or F (string)
- language - 2-character language code (e.g., "en")
- locale - lower-case locale code (e.g., "en-gb")
- tts_name - name of text to speech system

Filter the results with query parameters:
- ?tts_name - only text to speech system(s)
- ?language - only language(s)
- ?locale - only locale(s)
- ?gender - only gender(s)

GET /api/languages
- ?tts_name - only text to speech system(s)
A subset of SSML is supported:
- <speak> - wrap around SSML text
  - lang - set language for document
- <s> - sentence (disables automatic sentence breaking)
  - lang - set language for sentence
- <w> / <token> - word (disables automatic tokenization)
- <voice name="..."> - set voice of inner text
  - voice - name or language of voice
  - Voice names have the form tts:voice (e.g., "glow-speak:en-us_mary_ann") or tts:voice#speaker_id (e.g., "coqui-tts:en_vctk#p228")
  - If a language is given, its preferred voice is used (set with --preferred-voice <lang> <voice>)
- <say-as interpret-as=""> - force interpretation of inner text
  - interpret-as - one of "spell-out", "date", "number", "time", or "currency"
  - format - way to format text depending on interpret-as
- <break time=""> - pause for the given amount of time
- <sub alias=""> - substitute alias for inner text
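To hear <say-as> and <sub> in action, SSML can be sent through the HTTP API. Treat the ssml query flag below as an assumption; whether your OpenTTS version accepts it on /api/tts should be checked against swagger.yaml or the test page at /openapi/.
$ curl -G 'http://localhost:5500/api/tts' \
    --data-urlencode 'voice=glow-speak:en-us_mary_ann' \
    --data-urlencode 'ssml=true' \
    --data-urlencode 'text=<speak><s>Spell it out: <say-as interpret-as="spell-out">TTS</say-as>.</s><s><sub alias="text to speech">TTS</sub> is short and sweet.</s></speak>' \
    -o ssml_demo.wav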
Use OpenTTS as a drop-in replacement for MaryTTS.
The voice format is <TTS_SYSTEM>:<VOICE_NAME>. Visit the OpenTTS web UI and copy/paste the "voice id" of your favorite voice here.
You may need to change the port in your docker run command to -p 59125:5500 for compatibility with existing software.
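A quick way to check the MaryTTS-compatible endpoint is to issue the request a MaryTTS client would. The /process path and its INPUT_TEXT/VOICE parameters are the standard MaryTTS interface; treat their exact handling by OpenTTS as an assumption and verify against your client.
$ curl -G 'http://localhost:59125/process' \
    --data-urlencode 'INPUT_TEXT=Hello from OpenTTS' \
    --data-urlencode 'VOICE=larynx:harvard' \
    -o marytts_check.wav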
On the Raspberry Pi, you may need to lower the quality of Larynx voices to get reasonable response times.
This is done by appending the quality level to the end of your voice:
tts:
  - platform: marytts
    voice: larynx:harvard;low
Available quality levels are high (the default), medium, and low.
Note that this only applies to Larynx and Glow-Speak voices.
For multi-speaker models (currently just coqui-tts:en_vctk), you can append a speaker name or id to your voice:
tts:
  - platform: marytts
    voice: coqui-tts:en_vctk#p228
You can get the available speaker names from /api/voices or provide a 0-based index instead:
tts:
  - platform: marytts
    voice: coqui-tts:en_vctk#42
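The same speaker-qualified voice id should also work directly against the HTTP API (a sketch; the #speaker suffix follows the tts:voice#speaker_id format described in the SSML notes above):
$ curl -G 'http://localhost:5500/api/tts' \
    --data-urlencode 'voice=coqui-tts:en_vctk#p228' \
    --data-urlencode 'text=Each speaker has a distinct voice.' \
    -o vctk_p228.wav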
Default settings for Larynx can be provided on the command-line:
- --larynx-quality - vocoder quality ("high", "medium", or "low", default: "high")
- --larynx-noise-scale - voice volatility (0-1, default: 0.667)
- --larynx-length-scale - voice speed (< 1 is faster, default: 1.0)
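These flags go after the image name in docker run, just like --no-espeak and --cache above; the values here are only illustrative:
$ docker run -it -p 5500:5500 synesthesiam/opentts:en --larynx-quality medium --larynx-length-scale 0.85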
OpenTTS uses Docker buildx to build multi-platform images based on Debian bullseye.
Before building, make sure to download the voices you want to the voices directory. Each TTS system that uses external voices has a sub-directory with instructions on how to download voices.
If you only plan to build an image for your current platform, you should be able to run:
make <lang>
from the root of the cloned repository, where <lang> is one of the supported languages. If it builds successfully, you can run it with:
make <lang>-run
For example, the English image can be built and run with:
make en
make en-run
Under the hood, this does two things:
- runs the configure script with --languages <lang>
- runs docker buildx build with the appropriate arguments

You can manually run the configure script -- see ./configure --help for more options. This script generates the following files (used by the build process):
- packages installed with apt-get during the build only
- packages installed with apt-get for runtime
- Python packages installed with pip
- arguments passed to docker buildx build
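For instance, an English-only configuration can be generated with (a sketch; make en performs this step before calling docker buildx build):
$ ./configure --languages en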
To build an image for a different platform, you need to initialize a docker buildx builder:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker buildx create --config /etc/docker/buildx.conf --use --name mybuilder
docker buildx use mybuilder
docker buildx inspect --bootstrap
NOTE: For some reason, you have to do these steps each time you reboot. If you see errors like "Error while loading /usr/sbin/dpkg-split: No such file or directory", run docker buildx rm mybuilder
and re-run the steps above.
When you run make, specify the platform(s) you want to build for:
DOCKER_PLATFORMS='--platform linux/amd64,linux/arm64,linux/arm/v7' make <lang>
You may place pre-compiled Python wheels in the download
directory. They will be used during the installation of Python packages.
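For example, a wheel can be built on (or for) the target platform and dropped into that directory; the package name below is only a placeholder:
$ pip wheel --wheel-dir download/ <some-python-package>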