Recent fixes:

- `--sampler dpm++2m` is now fixed, and actually uses dpm++2m. See here for more discussion.
- `--kv_cache` is now fixed, and produces outputs identical to the original tortoise repo. It is also enabled by default now because of this.
- `--ar-checkpoint` lets you load a fine-tuned autoregressive model (see usage below).

Click me to skip to installation && usage!
This is a working project to drastically boost the performance of TorToiSe, without modifying the base models. Expect speedups of 5~10x, and hopefully 20x or larger when this project is complete.
This repo adds the following config options for TorToiSe for faster inference:
- `--kv_cache`: enables the KV cache for MUCH faster GPT sampling
- `--half`: half-precision inference where possible
- `--sampler dpm++2m`: DPM-Solver samplers for better diffusion
- `--low_vram`: option to toggle CPU offloading, for high-VRAM users

All changes in this fork are licensed under the AGPL. For avoidance beyond all doubt, the following statement is added as a comment to all changed code files:
AGPL: a notification must be added stating that changes have been made to that file.
All results listed were generated with a slightly undervolted RTX 3090 on Ubuntu 22.04, with the following base command:
./script/tortoise-tts.py --voice emma --seed 42 --text "$TEXT"
All samples have voicefixer applied.

Original TorToiSe repo:

| speed (B) | speed (A) | preset | sample |
|---|---|---|---|
| 112.81s | 14.94s | ultra_fast | here |
New repo, with --preset ultra_fast:

| speed (B) | speed (A) | GPT kv-cache | sampler | steps | cond-free diffusion | autocast to fp16 | samples (vs orig repo) |
|---|---|---|---|---|---|---|---|
| 118.61s | 11.20s | ❌ | DDIM | 30 | ❌ | ❌ | identical |
| 9.98s | 4.17s | ✅ | DDIM | 30 | ❌ | ❌ | identical |
| 14.32s | 5.58s | ✅ | DPM++2M | 30 | ✅ | ❌ | best |
| 7.51s | 3.26s | ✅ | DDIM | 10 | ✅ | ❌ | ~identical |
| 7.12s | 3.30s | ✅ | DDIM | 10 | ✅ | ✅ | okayish |
| 7.21s | 3.27s | ✅ | DDIM | 10 | ❌ | ✅ | bad |
Results measure the time taken to run `tts.tts_with_preset(...)` via the CLI. The example texts used were:
A (70 characters):

> I'm looking for contributors who can do optimizations better than me.

B (188 characters):

> Then took the other, as just as fair,
> And having perhaps the better claim,
> Because it was grassy and wanted wear;
> Though as for that the passing there
> Had worn them really about the same,
Half precision currently significantly worsens outputs, so I do not recommend enabling it unless you are happy with the samples linked above. Using cond-free diffusion together with half precision seems to produce decent outputs; see the example below.
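For instance (a sketch, assuming cond-free diffusion stays enabled by default unless --no_cond_free is passed):

./script/tortoise-tts.py --half --preset ultra_fast # cond-free diffusion left on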
There are two methods for installation.

Method 1 (pip): the installation process is identical to the original tortoise-tts repo.
git clone https://github.com/152334H/tortoise-tts-fast
cd tortoise-tts-fast
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
python3 -m pip install -e .
pip3 install git+https://github.com/152334H/BigVGAN.git
Note that if you have the original tortoise installed, you should uninstall it first (pip uninstall tortoise), install the requirements (pip install -r requirements.txt), and install this repo in editable mode (pip install -e .), as this repository will be updated frequently.

Method 2 (Poetry): first, install Poetry. Then, run:
poetry install
poetry shell
If you are experiencing errors related to GPU usage (or lack thereof), please see the instructions on the pytorch website to install pytorch with proper GPU support.
For maximum speed (and worst quality), you can try:
./script/tortoise-tts.py --half --no_cond_free --preset ultra_fast #...
# or, to only generate 1 sample:
./script/tortoise-tts.py --half --no_cond_free --preset single_sample --candidates 1 #...
But in most cases, these settings should perform decently && fast:
./script/tortoise-tts.py --preset ultra_fast # ...
For better quality, you might want the very_fast preset:
./script/tortoise-tts.py --preset very_fast # ...
You can obtain outputs 100% identical to the original tortoise repo with the following command:
./script/tortoise-tts.py --preset ultra_fast_old --original_tortoise #...
If you want to load a fine-tuned autoregressive model, use the --ar-checkpoint argument:
./script/tortoise-tts.py --preset very_fast --ar-checkpoint /path/to/checkpoint.pth #...
An experimental Streamlit web UI is now available. To access, run:
$ streamlit run script/app.py
The changes fall into two groups: optimization related, such as the changes to the `transformers` model definition (see `GPT2InferenceModel`), and QoL related.
As stated by an 11Labs developer:
Original README description:
Tortoise is a text-to-speech program built with the following priorities:

1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
This repo contains all the code needed to run Tortoise TTS in inference mode.
A (very) rough draft of the Tortoise paper is now available in doc format. I would definitely appreciate any comments, suggestions or reviews: https://docs.google.com/document/d/13O_eyY65i6AkNrN_LdPhpUjGhyTNKYHvDrIvHnHe1GA
I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue in cheek: this model is insanely slow. It leverages both an autoregressive decoder and a diffusion decoder; both known for their low sampling rates. On a K80, expect to generate a medium sized sentence every 2 minutes.
See this page for a large list of example outputs.
Cool application of Tortoise+GPT-3 (not by me): https://twitter.com/lexman_ai
Colab is the easiest way to try this out. I've put together a notebook you can use here: https://colab.research.google.com/github/152334H/tortoise-tts-fast/blob/main/tortoise_tts.ipynb
If you want to use this on your own computer, you must have an NVIDIA GPU.
First, install pytorch using these instructions: https://pytorch.org/get-started/locally/. On Windows, I highly recommend using the Conda installation path. I have been told that if you do not do this, you will spend a lot of time chasing dependency problems.
Next, install TorToiSe and its dependencies:
git clone https://github.com/neonbjb/tortoise-tts.git
cd tortoise-tts
python -m pip install -r ./requirements.txt
python setup.py install
If you are on Windows, you will also need to install pysoundfile: conda install -c conda-forge pysoundfile
This script allows you to speak a single phrase with one or more voices.
./script/tortoise-tts.py --text "I'm going to speak this" --voice random --preset fast
For reading large amounts of text:
./script/tortoise-tts.py --voice random --preset fast < textfile.txt
This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and output that as well.
Sometimes Tortoise screws up an output. You can re-generate any bad clips by re-running read.py with the --regenerate argument.
Tortoise can be used programmatically, like so:
```python
from tortoise import api
from tortoise.utils import audio

clips_paths = ["clip1.wav", "clip2.wav"]  # paths to your reference clips
reference_clips = [audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech()
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```
Tortoise was specifically trained to be a multi-speaker model. It accomplishes this by consulting reference clips.
These reference clips are recordings of a speaker that you provide to guide speech generation. These clips are used to determine many properties of the output, such as the pitch and tone of the voice, speaking speed, and even speaking defects like a lisp or stuttering. The reference clip is also used to determine non-voice related aspects of the audio output like volume, background noise, recording quality and reverb.
I've included a feature which randomly generates a voice. These voices don't actually exist and will be random every time you run it. The results are quite fascinating and I recommend you play around with it!
You can use the random voice by passing in 'random' as the voice name. Tortoise will take care of the rest.
For those in the ML space: this is created by projecting a random vector onto the voice conditioning latent space.
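A minimal sketch of using the random voice through the API, assuming the upstream load_voice helper and the fallback behavior described above (for 'random', no reference clips are loaded and tortoise samples a conditioning latent internally):

```python
from tortoise import api
from tortoise.utils.audio import load_voice

tts = api.TextToSpeech()
# 'random' returns no clips or latents; tortoise then draws a random latent itself.
voice_samples, conditioning_latents = load_voice('random')
gen = tts.tts_with_preset("Hello world!", voice_samples=voice_samples,
                          conditioning_latents=conditioning_latents, preset='fast')
```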
This repo comes with several pre-packaged voices. Voices prepended with "train_" came from the training set and perform far better than the others. If your goal is high quality speech, I recommend you pick one of them. If you want to see what Tortoise can do for zero-shot mimicking, take a look at the others.
To add new voices to Tortoise, you will need to do the following: gather a few reference clips of your speaker, save them as WAV files at a 22,050 sample rate, create a subdirectory in voices/ containing those clips, and select it with --voice=<your_subdirectory_name>.
As mentioned above, your reference clips have a profound impact on the output of Tortoise. Since the clips determine non-voice properties like volume, background noise, recording quality and reverb, prefer clean, noise-free recordings of the target speaker.
Tortoise is primarily an autoregressive decoder model combined with a diffusion model. Both of these have a lot of knobs that can be turned that I've abstracted away for the sake of ease of use. I did this by generating thousands of clips using various permutations of the settings and using a metric for voice realism and intelligibility to measure their effects. I've set the defaults to the best overall settings I was able to find. For specific use-cases, it might be effective to play with these settings (and it's very likely that I missed something!)
These settings are not available in the normal scripts packaged with Tortoise. They are available, however, in the API. See `api.tts` for a full list.
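A sketch of what turning those knobs looks like, continuing from the API example above (parameter names taken from the api.tts signature; the values are illustrative, not tuned recommendations):

```python
gen = tts.tts(
    "your text here",
    voice_samples=reference_clips,
    num_autoregressive_samples=256,  # size of the AR candidate pool (slower when larger)
    temperature=0.8,                 # AR sampling temperature
    diffusion_iterations=100,        # diffusion steps (higher = better fidelity, slower)
    cond_free=True,                  # conditioning-free diffusion guidance
    cond_free_k=2.0,                 # strength of the cond-free guidance
)
```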
Some people have discovered that it is possible to do prompt engineering with Tortoise! For example, you can evoke emotion by including things like "I am really sad," before your text. I've built an automated redaction system that you can use to take advantage of this. It works by attempting to redact any text in the prompt surrounded by brackets. For example, the prompt "[I am really sad,] Please feed me." will only speak the words "Please feed me" (with a sad tonality).
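For example, with the CLI script from this repo:

./script/tortoise-tts.py --text "[I am really sad,] Please feed me." --voice random --preset fast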
Tortoise ingests reference clips by feeding them individually through a small submodel that produces a point latent, then taking the mean of all of the produced latents. The experimentation I have done has indicated that these point latents are quite expressive, affecting everything from tone to speaking rate to speech abnormalities.

This lends itself to some neat tricks. For example, you can feed two different voices to tortoise and it will output what it thinks the "average" of those two voices sounds like.
Use the script get_conditioning_latents.py to extract conditioning latents for a voice you have installed. This script will dump the latents to a .pth pickle file. The file will contain a single tuple, (autoregressive_latent, diffusion_latent).

Alternatively, use api.TextToSpeech.get_conditioning_latents() to fetch the latents.
After you've played with them, you can use them to generate speech by creating a subdirectory in voices/ with a single ".pth" file containing the pickled conditioning latents as a tuple (autoregressive_latent, diffusion_latent).
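A minimal sketch of that workflow, including the voice-averaging trick described above (the voice names and file paths are hypothetical):

```python
import os
import torch
from tortoise import api
from tortoise.utils.audio import load_audio

tts = api.TextToSpeech()
clips = [load_audio(p, 22050) for p in ["clip1.wav", "clip2.wav"]]
latents = tts.get_conditioning_latents(clips)  # (autoregressive_latent, diffusion_latent)

# Save as a voices/ subdirectory containing a single .pth file, as described above.
os.makedirs("voices/myvoice", exist_ok=True)
torch.save(latents, "voices/myvoice/myvoice.pth")

# Averaging two voices: take the element-wise mean of their saved latent tuples.
lat_a = torch.load("voices/voice_a/voice_a.pth")
lat_b = torch.load("voices/voice_b/voice_b.pth")
blend = tuple((a + b) / 2 for a, b in zip(lat_a, lat_b))
os.makedirs("voices/blend", exist_ok=True)
torch.save(blend, "voices/blend/blend.pth")
```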
Probabilistic models like Tortoise are best thought of as an "augmented search" - in this case, through the space of possible utterances of a specific string of text. The impact of community involvement in perusing these spaces (such as is being done with GPT-3 or CLIP) has really surprised me. If you find something neat that you can do with Tortoise that isn't documented here, please report it to me! I would be glad to publish it to this page.
Out of concerns that this model might be misused, I've built a classifier that tells the likelihood that an audio clip came from Tortoise.
This classifier can be run on any computer, usage is as follows:
python tortoise/is_this_from_tortoise.py --clip=<path_to_suspicious_audio_file>
This model has 100% accuracy on the contents of the results/ and voices/ folders in this repo. Still, treat this classifier as a "strong signal". Classifiers can be fooled and it is likewise not impossible for this classifier to exhibit false positives.
Tortoise TTS is inspired by OpenAI's DALLE, applied to speech data and using a better decoder. It is made up of 5 separate models that work together. I've assembled a write-up of the system architecture here: https://nonint.com/2022/04/25/tortoise-architectural-design-doc/
These models were trained on my "homelab" server with 8 RTX 3090s over the course of several months. They were trained on a dataset consisting of ~50k hours of speech data, most of which was transcribed by ocotillo. Training was done on my own DLAS trainer.
I currently do not have plans to release the training configurations or methodology. See the next section.
Tortoise v2 works considerably better than I had planned. When I began hearing some of the outputs of the last few versions, I began wondering whether or not I had an ethically unsound project on my hands. The ways in which a voice-cloning text-to-speech system could be misused are many. It doesn't take much creativity to think up how.
After some thought, I have decided to go forward with releasing this. Among the reasons for this choice is the tortoise-detect classifier above, which can tell whether an audio clip was generated by Tortoise.

The diversity expressed by ML models is strongly tied to the datasets they were trained on.
Tortoise was trained primarily on a dataset consisting of audiobooks. I made no effort to balance diversity in this dataset. For this reason, Tortoise will be particularly poor at generating the voices of minorities or of people who speak with strong accents.
Tortoise v2 is about as good as I think I can do in the TTS world with the resources I have access to. A phenomenon that happens when training very large models is that as parameter count increases, the communication bandwidth needed to support distributed training of the model increases multiplicatively. On enterprise-grade hardware, this is not an issue: GPUs are attached together with exceptionally wide buses that can accommodate this bandwidth. I cannot afford enterprise hardware, though, so I am stuck.
I want to mention here that I think Tortoise could be a lot better. The three major components of Tortoise are either vanilla Transformer Encoder stacks or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see no reason to believe that the same is not true of TTS.
The largest model in Tortoise v2 is considerably smaller than GPT-2 large. It is 20x smaller than the original DALLE transformer. Imagine what a TTS model trained at or near GPT-3 or DALLE scale could achieve.
If you are an ethical organization with computational resources to spare, and you are interested in seeing what this model could do if properly scaled out, please reach out to me! I would love to collaborate on this.
This project has garnered more praise than I expected. I am standing on the shoulders of giants, though, and I want to credit the amazing folks in the community that have helped make this happen.
Tortoise was built entirely by me using my own hardware. My employer was not involved in any facet of Tortoise's development.
If you use this repo or the ideas therein for your research, please cite it! A BibTeX entry can be found in the right pane on GitHub.