Closed danielw97 closed 9 months ago
Please try with the latest update I just pushed up. It now uses streaming inference and handles longer texts much better (as long as it's still able to chunk into sentences).
If you want to email me (doc@aedo.net) details on the book and chapter you're reliably having problems with I will take a look.
Amazing, after some testing with the latest commits pushed to the main branch, it appears that both of my issues have been fixed. The streaming code now used with xtts also seems to have fixed some inconsistent accent switching I was seeing. Thanks again for your work and improvements on this.
That's excellent to hear!
Thanks for using it, and thank you so much for providing feedback; I really appreciate it. I hope it keeps working well for you, and definitely open issues if you find problems or have ideas for improvement.
Hello, I'm currently having an issue when using xtts v2. Although it works fine normally, if a very long paragraph exceeds the 400-token limit enforced by the model, it crashes and refuses to continue. I'm not sure how doable this is, but is there any possibility of adding the ability to split paragraphs further so they don't exceed this length, or is manually editing the files my best bet? Thanks for this project nonetheless, as it's proving extremely useful.
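For anyone hitting the same limit before updating: a workaround is to pre-split long paragraphs into sentence-sized chunks before feeding them to the model. Below is a minimal sketch of that idea; `chunk_text` is a hypothetical helper (not part of this project), and the word-count-based token estimate is an assumption, since the real count depends on the XTTS tokenizer.

```python
import re

def chunk_text(text, max_tokens=400, tokens_per_word=1.3):
    # Split on sentence-ending punctuation followed by whitespace.
    # This is a rough heuristic, not a full sentence segmenter.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())

    chunks, current, current_est = [], [], 0.0
    for sentence in sentences:
        # Assumed estimate: ~1.3 tokens per word. The actual token
        # count would come from the model's tokenizer.
        est = len(sentence.split()) * tokens_per_word
        if current and current_est + est > max_tokens:
            # Close out the current chunk before it exceeds the limit.
            chunks.append(' '.join(current))
            current, current_est = [], 0.0
        current.append(sentence)
        current_est += est
    if current:
        chunks.append(' '.join(current))
    return chunks
```

Each returned chunk can then be synthesized separately and the audio concatenated. A single sentence longer than the limit would still exceed it, so a fuller fix would also split on commas or clause boundaries.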