fakerybakery opened 1 month ago
Currently we train on audio clips of at most 30 seconds. With @ylacombe we're looking at increasing the context length to support longer audio. ALiBi embeddings (or a variant thereof) look promising for this: https://arxiv.org/abs/2108.12409
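For anyone curious what ALiBi looks like in practice, here is a minimal NumPy sketch of the bias described in the linked paper: each attention head gets a fixed slope, and attention logits are penalized linearly by query–key distance. This is a generic illustration of the paper's formulation, not this project's training code.

```python
import numpy as np

def alibi_slopes(num_heads):
    # Geometric slope schedule from the ALiBi paper:
    # head i (1-indexed) out of n heads gets slope 2^(-8*i/n).
    return np.array([2.0 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)])

def alibi_bias(num_heads, seq_len):
    # Additive bias for the attention logits. For query position q and
    # key position k (k <= q under a causal mask), the bias is
    # -slope * (q - k): keys further in the past are penalized more.
    slopes = alibi_slopes(num_heads)              # (heads,)
    pos = np.arange(seq_len)
    dist = pos[None, :] - pos[:, None]            # (q, k) -> k - q, <= 0 for past keys
    bias = slopes[:, None, None] * dist[None, :, :]
    return bias  # shape (heads, q, k); add to logits before the softmax

# Example: the penalty grows with distance, and positions never leave
# the model's "comfort zone" abruptly, which is why ALiBi tends to
# extrapolate to sequences longer than those seen in training.
b = alibi_bias(num_heads=8, seq_len=16)
```

Because the bias depends only on relative distance, inference on, say, 60-second audio reuses the same slopes learned on 30-second clips, which is the property that makes it attractive for extending context length.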
As future work, it would be amazing if you could feed an entire chapter of an audiobook to the model and have it learn the prosody and intonation directly from the training examples (with no guidance from the text prompt).
That would be nice. I was wondering whether it would be possible to use chunking, with previous chunks as context, to make the speech sound natural across different speakers. (This would be nice for audiobooks with multiple characters.)
Are there any updates about long-form speech synthesis? I'm looking forward to the results. Also, the future work you mentioned sounds especially applicable to audiobooks, but I'm curious what the voice would be like. A pre-defined voice?
Hi, congrats on the release!! Is long-form synthesis planned? Thank you!