huggingface / parler-tts

Inference and training library for high-quality TTS models.

Is there a way to create consistent voices? #11

Open BigArty opened 1 month ago

BigArty commented 1 month ago

I want to make an app that reads long texts in chunks. For this I need to get the same voice for the same speaker prompt, but right now I get similar, yet not identical, voices on each generation. Is it possible to somehow fix the voice?

sanchit-gandhi commented 1 month ago

This is a very valid feature request @BigArty. What we're thinking of doing is fine-tuning the model on a single speaker to fix the speaker's voice, and then controlling a subset of features (speed, tone, background noise) through the text prompt. Would this work for you?
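
To illustrate the idea, here is a rough sketch with the current v0.1 checkpoint, varying only the description between calls (the descriptions are just examples; a single-speaker checkpoint would keep the voice fixed while these attributes change):

import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")

prompt = "The quick brown fox jumps over the lazy dog."

# Keep the prompt fixed and vary only the controllable attributes in the description.
descriptions = [
    "A female speaker delivers the text slowly and calmly with very clear audio.",
    "A female speaker delivers the text quickly and expressively with slight background noise.",
]

for i, description in enumerate(descriptions):
    input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    sf.write(f"parler_tts_variant_{i}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)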

baimu1010 commented 1 month ago

+1, I need it too

BigArty commented 1 month ago

@sanchit-gandhi Yes, that would be great! The ability to control the tone and speed is also very convenient.

boveyking commented 1 month ago

Really good project, thank you guys. Being able to use a voice_id (or speaker ID) to produce a consistent voice is a must for making this useful in the real world.

sladec commented 1 month ago

Nice work guys! The ability to generate speech from the same speaker would be really useful. Accent control would also be a +1.

adamfils commented 1 month ago

@sanchit-gandhi It would also be nice to use seeds to get consistent voices.

bkutasi commented 1 month ago

In my opinion this is one of the most important features of a TTS, so I would love to see this integrated.

sanchit-gandhi commented 4 weeks ago

We have a first single-speaker fine-tuned checkpoint: https://huggingface.co/ylacombe/parler-tts-mini-jenny-30H

It could be useful if you want a specific voice, in this case Jenny (she's Irish ☘️). Usage is more or less the same as Parler-TTS v0.1; just specify the keyword “Jenny” in the voice description:

import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer, set_seed
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the single-speaker (Jenny) fine-tuned checkpoint and its tokenizer.
model = ParlerTTSForConditionalGeneration.from_pretrained("ylacombe/parler-tts-mini-jenny-30H").to(device)
tokenizer = AutoTokenizer.from_pretrained("ylacombe/parler-tts-mini-jenny-30H")

prompt = "Hey, how are you doing today? My name is Jenny, and I'm here to help you with any questions you have."
description = "Jenny speaks at an average pace with an animated delivery in a very confined sounding environment with clear audio quality."

# Tokenize the voice description and the transcript separately.
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

set_seed(42)  # fix the sampling seed for reproducibility
# specify min length to avoid 0-length generations
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, min_length=10)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
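
And for the chunked long-text use case from the original post, here is a minimal sketch building on the snippet above (it reuses model, tokenizer, device and description; the chunking is deliberately naive and just for illustration):

import numpy as np

chunks = [
    "First part of a long text.",
    "Second part of the same long text.",
]

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)

audio_chunks = []
for chunk in chunks:
    set_seed(42)  # re-seed before every chunk so sampling stays reproducible
    prompt_input_ids = tokenizer(chunk, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, min_length=10)
    audio_chunks.append(generation.cpu().numpy().squeeze())

sf.write("parler_tts_long.wav", np.concatenate(audio_chunks), model.config.sampling_rate)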

baimu1010 commented 3 weeks ago

Can I control the output of the model by setting a seed, the same way I would in a text2image model?
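
(For reference, the snippet above already does this via transformers' set_seed; re-seeding before each generate call should make sampling reproducible for the same inputs and hardware. A minimal sketch, reusing the variables from that snippet:)

from transformers import set_seed

set_seed(42)
first = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, min_length=10)

set_seed(42)
second = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, min_length=10)
# With identical seeds, inputs and hardware, `first` and `second` should match.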

Kredithaichen commented 3 weeks ago

Having a checkpoint for a consistent voice sounds great! A bit of a dumb question on my end, though, @sanchit-gandhi: how can I use this new checkpoint? Where would I need to place the .safetensors file in order to load it with the script you provided? Is there a tutorial or getting-started document out there?
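
(In case it helps anyone with the same question: from_pretrained fetches and caches the weights from the Hugging Face Hub automatically, so there is no need to place the .safetensors file anywhere by hand. A minimal sketch, with a hypothetical local path as the alternative:)

from parler_tts import ParlerTTSForConditionalGeneration

# The checkpoint is downloaded from the Hub and cached locally on first use;
# no manual handling of the .safetensors file is needed.
model = ParlerTTSForConditionalGeneration.from_pretrained("ylacombe/parler-tts-mini-jenny-30H")

# Alternatively, pass the path to a local clone of the model repo
# (hypothetical path, for illustration):
# model = ParlerTTSForConditionalGeneration.from_pretrained("/path/to/parler-tts-mini-jenny-30H")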

furqan4545 commented 3 weeks ago

Sanchit, thank you so much for the amazing work. You guys are the true face of open-source contribution. I had the same feature request: a consistent speaker ID and the ability to generate long-form speech with a consistent voice. I mean, I don't want to break my text into chunks and then feed it to the model myself; it would be great if the model could take care of chunking itself and generate long-form speech. Plus, controlling emotions and speed through text is an amazing feature.