yangdongchao opened this issue 7 months ago
Right now it was trained on LibriTTS-R, which is quite small (roughly 1k hours at most). I am in the process of preparing a 3 TB dataset that will be used for training the next iteration.
Steve Korshakov
This model produces good voice quality and prosody for such a small amount of data. If we train it on a good multilingual dataset, we will get amazing speech quality. I am preparing a 1k-hour Hindi dataset for such training, alongside the already-available English and other Latin-script datasets. The only limiting factor is running MFA on such a large speech dataset; maybe I will train the GPT duration predictor on a fairly small subset of the data and the main model on all of it.
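The subset idea above (duration predictor on a small slice, main model on everything, so MFA only needs to align the slice) could be sketched as follows. This is a minimal illustration, not the author's actual setup: the 5% fraction, the utterance IDs, and the helper name are all assumptions.

```python
import random

def split_for_duration_predictor(utterance_ids, subset_fraction=0.05, seed=0):
    """Pick a small random subset of utterances for the duration predictor.

    Only this subset needs MFA forced alignment; the main acoustic
    model still trains on the full corpus. The fraction is illustrative.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    subset_size = max(1, int(len(utterance_ids) * subset_fraction))
    duration_subset = rng.sample(utterance_ids, subset_size)
    return duration_subset, utterance_ids  # (alignment subset, full corpus)

# Hypothetical corpus of 10k utterance IDs
corpus = [f"utt_{i:05d}" for i in range(10_000)]
dur_subset, full = split_for_duration_predictor(corpus)
print(len(dur_subset), len(full))  # 500 10000
```

Aligning only ~5% of a multilingual corpus would cut the MFA wall-clock cost by roughly 20x, at the price of a duration predictor trained on less data.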
Good job! Looking forward to your improved model trained on a large-scale dataset.
Yes, I think the generated voice is good. I am also trying to reproduce it.
I am doing the opposite: the GPT is already trained on a bigger dataset, but the audio model is not. The GPT is quite easy to train; I didn't even bother to tweak anything.
Hi, it is great work. I want to ask: how many hours of data did you use to get the performance in your demo?