Hey, just want to say that this would be a really big improvement. I find StyleTTS2 really useful for my purposes, and the biggest downside is how short the output it can generate is. The ability to change the output style from the reference file is so good.
I've been using the implementation in my pull request for ~2 days and haven't noticed any issues so far. It's a band-aid fix, and it doesn't make StyleTTS2 treat the full prompt as a single overall input, but if the model is well trained the chunks should come out consistently and you won't notice where the files were concatenated.
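For anyone curious what the band-aid actually does: it just splits the prompt into chunks, generates each chunk with the same reference/style settings, and concatenates the audio. A rough sketch of that loop is below; `synthesize_chunk` is a placeholder for whatever the actual StyleTTS2 inference call in the webui is, and the 24 kHz sample rate is just what I'm assuming the model outputs, so treat the names and numbers as illustrative rather than the real implementation.

```python
import numpy as np
from scipy.io.wavfile import write

def generate_long(chunks, synthesize_chunk, sr=24000, out_path="out.wav"):
    """Generate each text chunk separately and stitch the audio together.

    synthesize_chunk(text) -> 1-D float numpy array at sample rate sr
    (placeholder for the real StyleTTS2 inference call in webui.py).
    """
    pieces = []
    for chunk in chunks:
        audio = synthesize_chunk(chunk)
        pieces.append(audio)
        # A short pause between chunks helps hide the seams (optional).
        pieces.append(np.zeros(int(0.2 * sr), dtype=audio.dtype))
    full = np.concatenate(pieces)
    write(out_path, sr, full.astype(np.float32))
    return full
```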
PR added
EDIT: I created a pull request that implements this a little more cleanly (or at the very least Jarod can rework it and implement it however he sees best).
This code is modified from the NeuralVox readme for longer prompts and uses TortoiseTTS's split_and_recombine_text function. I sloppily copied the tortoise site-package from my copy of your ai-voice-cloning v3 repo into the StyleTTS2 webui venv's site-packages (I didn't install it, didn't want to deal with dependency conflicts, and am only using it for the split_and_recombine_text function) and worked it into your generate_audio function in webui.py. Ugly code below; it's a direct replacement for the generate_audio function.
Someone who is better with Python, please do a better/prettier job of implementing this (or just implement the specific split function mentioned directly in your code?) and open a pull request (or Jarod himself, of course).
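For reference, the split function itself is trivial to call on its own. This is roughly how it behaves in my copy of tortoise: it cuts the text at sentence boundaries and regroups the pieces so each chunk stays under max_length characters. The desired_length/max_length defaults may differ between versions, so double-check your install.

```python
from tortoise.utils.text import split_and_recombine_text

text = (
    "This is just a throwaway example paragraph. It has a few sentences in it. "
    "The function splits on sentence boundaries and regroups the pieces so each "
    "chunk stays under the length limits."
)

chunks = split_and_recombine_text(text, desired_length=200, max_length=300)
for i, chunk in enumerate(chunks):
    print(i, len(chunk), repr(chunk[:60]))
```

Each chunk then just goes through the normal generate path one at a time.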
That said, I've tested it and was able to generate 3 minutes of audio in 16 seconds. That's just the longest I tried; I couldn't tell you what limit, if any, there really is.
Feel free to move this to discussions if you think it belongs there instead.
```python
from tortoise.utils.text import split_and_recombine_text
import numpy as np
from scipy.io.wavfile import write