@erogol So I tried to train XTTS v2 with multiple speakers in Chinese. The evaluation loss seems abnormal.
I am also trying to train the XTTS GPT model as a beginner. The documentation suggests that we can only train the model to clone a single voice. My question is: can we train XTTS on a multilingual, multi-speaker dataset? I would like to improve the general model quality in three different languages (Spanish, Italian, and German).
I know this isn't the best place to ask, but I know you encountered the same problem.
@Thomcle For now, I don't think XTTS v2 supports this. I tried training with multiple speakers by setting each speaker name to the audio file name, but the inference performance is not stable.
We are also trying to do this. I don't see why it shouldn't be possible in theory if the dataset quality is good. What matters, I think, is that the model sees a mixture of languages during training, i.e. one minibatch in language A, then one in language B, and so on.
I think the solution would look something like changing this:
config_dataset = BaseDatasetConfig(
    formatter="ljspeech",
    dataset_name="ljspeech",
    path="/raid/datasets/LJSpeech-1.1_24khz/",
    meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv",
    language="en",
)
to this:
config_dataset = BaseDatasetConfig(
    formatter="ljspeech",
    dataset_name="ljspeech",
    path="/raid/datasets/LJSpeech-1.1_24khz/",
    meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv",
    language="auto",
)
When language="auto" is set, the language would be detected while loading the dataset. There are many libraries that do this.
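For instance, a minimal sketch of per-sample detection using the langid library (just one option; langdetect or fasttext would work similarly, and langid needs to be installed first):

import langid

def detect_language(text):
    # classify() returns a (language_code, score) tuple, e.g. ("en", -54.4)
    lang, _score = langid.classify(text)
    return lang

detect_language("Hello, how are you today?")  # expected: "en"
detect_language("Hola, ¿como estas hoy?")     # expected: "es"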
Some additional logic might be needed if we want to make sure each minibatch contains only one language, as sketched below. I am not sure how important that is, though; we should train and find out.
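Here is a rough sketch of what that could look like, assuming each dataset item already carries the "language" key from detection. This is illustrative, not the sampler the trainer actually uses:

import random
from collections import defaultdict
from torch.utils.data import Sampler

class MonolingualBatchSampler(Sampler):
    """Yield batches of dataset indices that all share one language."""

    def __init__(self, items, batch_size):
        self.batch_size = batch_size
        # Bucket sample indices by detected language.
        self.buckets = defaultdict(list)
        for idx, item in enumerate(items):
            self.buckets[item["language"]].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.buckets.values():
            random.shuffle(indices)
            # Chunk each language bucket into monolingual batches.
            for i in range(0, len(indices), self.batch_size):
                batches.append(indices[i : i + self.batch_size])
        # Shuffle the batch order so languages alternate across steps.
        random.shuffle(batches)
        yield from batches

    def __len__(self):
        # Ceiling division per language bucket.
        return sum(-(-len(b) // self.batch_size) for b in self.buckets.values())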
Once the language is recognized and converted to tokens, the rest of the process is the same and should need no change.
One more solution that we are trying (training starts and the loss decreases without errors):
Go to this file:
/TTS/TTS/tts/datasets/__init__.py
import os

import langid  # used to detect the language of each sample from its text

def add_extra_keys(metadata, language, dataset_name):
    for item in metadata:
        # Detect the language from the transcript instead of using the
        # `language` argument passed in from the dataset config.
        detected = langid.classify(item["text"])[0]
        # Custom logic for our data: anything that is not English is Hindi.
        if detected != "en":
            detected = "hi"
        item["language"] = detected
        # Add a unique audio name.
        relfilepath = os.path.splitext(os.path.relpath(item["audio_file"], item["root_path"]))[0]
        audio_unique_name = f"{dataset_name}#{relfilepath}"
        item["audio_unique_name"] = audio_unique_name
    return metadata
Modify the function to something like the above, so that item['language'] is set by a language-detection model or some custom logic instead of by the parameter you pass in during training.
Training is currently running; I will share the results here whether they are good or bad. The loss seems to have decreased significantly, though.
@smallsudarshan Have you changed the ljspeech formatter as well? By default, the ljspeech formatter sets the speaker name for all audios to ljspeech, which is not correct for the multi-speaker scenario. What do you think?
Hey @OswaldoBornemann, I have not, actually. I was going through the code, and I think the speaker name is not being used anywhere for training. If you think it is, please let me know.
It is, however, used in the split_dataset function in TTS/tts/datasets/__init__.py, and you might get slightly better eval metrics if you set it correctly. But my dataset is well balanced across speakers at the moment, so I have not added this.
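For anyone who does need per-file speaker names, a custom formatter could look something like this. It is only a sketch assuming a hypothetical metadata layout of "audio_id|speaker_id|text"; adapt the column parsing to your own file. Deriving the speaker from the audio filename, as mentioned earlier in the thread, would work the same way:

import os

def multispeaker_formatter(root_path, meta_file, **kwargs):
    # Each metadata row: audio_id|speaker_id|text (hypothetical layout).
    items = []
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            audio_id, speaker_id, text = line.strip().split("|")[:3]
            items.append({
                "text": text,
                "audio_file": os.path.join(root_path, "wavs", audio_id + ".wav"),
                "speaker_name": speaker_id,  # per-row speaker instead of a constant
                "root_path": root_path,
            })
    return items

If I remember right, such a function can be passed to load_tts_samples through its formatter argument instead of naming a built-in formatter in the config.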
After 5 epochs on around 12 thousand samples in total, of varying lengths, from 2 speakers (without much pre-processing, i.e. no shaping the distribution of text lengths, accents, etc.):
Train loss / eval loss: [loss curve screenshots attached]
Here are a few samples in English and Hindi. The model seems to do a decent job given the data. For example, my Hindi audios have a very strong Assamese accent (from a particular region of India) that it has picked up, and the quality of the audio is very close to the data I trained on.
In my experience, it is also able to produce audio for sentences that mix Hindi and English in the text, which is often the case, even though I have not explicitly trained on such sentences.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check our discussion channels.
Can we train XTTS v2 with an original dataset that is multilingual and multi-speaker?