FurkanGozukara opened 1 year ago
Have a look at this: https://github.com/neonbjb/tortoise-tts/blob/main/tortoise/read.py
I use it this way:

```
pip install nltk
```

nltk splits the file's text into sentences, which is faster for me than tortoise's default chunking, and my voice doesn't change as much.
```python
from nltk import tokenize  # requires a one-time nltk.download('punkt')

text_file = 'path/to/text/file'
with open(text_file, 'r', encoding='utf-8') as f:
    text = ' '.join(f.readlines())

# one generation call per sentence, in reading order
texts = tokenize.sent_tokenize(text)
for j, text in enumerate(texts):
    gen = tts.tts_with_preset(text, voice_samples=_voice_samples, enable_redaction=True,
                              conditioning_latents=_conditioning_latents, preset=_preset)
```
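To get the per-sentence outputs back in reading order, I save them with zero-padded names. A minimal sketch; the `numbered_wav_name` helper is my own, not part of tortoise:

```python
def numbered_wav_name(index, stem="sentence"):
    # zero-pad the index so lexicographic file sorting matches generation order
    return f"{stem}_{index:04d}.wav"

# e.g. inside the loop above, something like:
#   torchaudio.save(numbered_wav_name(j), gen.squeeze(0).cpu(), 24000)
```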
I asked the question here: https://github.com/152334H/tortoise-tts-fast/issues/82

Currently the code below is working:

```
python tortoise_tts.py --preset ultra_fast --ar_checkpoint "F:\DL-Art-School\experiments\test1\models\152_gpt.pth" -o "152.wav" "Greetings everyone."
```
Now I have 2 questions:

1st: which parameters can I change to improve quality? Just switch the preset from ultra_fast to fast?

2nd: can you give me a small Python script that iteratively reads a text file, synthesizes each line in order, and saves the outputs in order, without reloading the model, as efficiently as possible?
Thank you so much. I can point this Python script at the correct venv myself; that part is easy.
Also, I can't find an option to pass in a large text and generate a voice clip per sentence or per chunk. Is there a separator option, e.g. split the given text on the newline character and generate a new clip for each piece?
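What I mean by a separator option, as a sketch (this helper is hypothetical, not something tortoise ships):

```python
def split_on_separator(text, sep="\n"):
    # each non-empty piece becomes one generation request, in order
    return [piece.strip() for piece in text.split(sep) if piece.strip()]
```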