NVIDIA / mellotron

Mellotron: a multispeaker voice synthesis model based on Tacotron 2 GST that can make a voice emote and sing without emotive or singing training data
BSD 3-Clause "New" or "Revised" License

The speaker ids are misaligned in inference.ipynb #59

Closed pneumoman closed 4 years ago

pneumoman commented 4 years ago

The mapping of speakers to mellotron_ids in inference.ipynb is incorrect: it uses the evaluation filelist instead of the training filelist. The evaluation filelist does not contain all of the speaker ids, so speakers get matched to the wrong Mellotron ids. I've attached a test program that demonstrates this, along with its output. Here are a few examples:

| LibriTTS ID (eval) | LibriTTS ID (train) | Mellotron ID (eval) | Mellotron ID (train) | Speaker Name (eval) | Speaker Name (train) |
|---|---|---|---|---|---|
| 40 | 40 | 26 | 40 | Vicki Barbour | Vicki Barbour |
| 78 | 78 | 71 | 103 | Hugh McGuire | Hugh McGuire |
| 87 | 83 | 83 | 111 | Rosalind Wills | Catharine Eastman |
| 118 | 87 | 2 | 119 | Alex Buie | Rosalind Wills |

test_speaker_ids.out.txt

test_speaker_ids.py.txt
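Roughly, the attached script does the following (a minimal sketch; `TextMelLoader` and its `speaker_ids` lookup table come from the repo's data_utils.py, and the eval filelist name is assumed to follow the repo's naming pattern):

```python
# Sketch: build the speaker lookup table from each filelist and compare.
# TextMelLoader maps each original LibriTTS id to a consecutive Mellotron id
# based only on the ids present in the filelist it is given, so the two
# tables disagree as soon as the eval filelist is missing speakers.
from data_utils import TextMelLoader

eval_list = 'filelists/libritts_train_clean_100_audiopath_text_sid_shorterthan10s_atleast5min_val_filelist.txt'
train_list = 'filelists/libritts_train_clean_100_audiopath_text_sid_shorterthan10s_atleast5min_train_filelist.txt'

# hparams is the hyperparameter object created earlier, as in the notebook
eval_ids = TextMelLoader(eval_list, hparams).speaker_ids
train_ids = TextMelLoader(train_list, hparams).speaker_ids

for libritts_id in sorted(train_ids):
    if eval_ids.get(libritts_id) != train_ids[libritts_id]:
        print(libritts_id, eval_ids.get(libritts_id), train_ids[libritts_id])
```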

rafaelvalle commented 4 years ago

Where are you seeing this behavior? Note that during training we set the validation data loader with the speakers from the training data loader. https://github.com/NVIDIA/mellotron/blob/master/train.py#L44
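For reference, the relevant lines look roughly like this (paraphrased sketch of train.py's data loader setup; treat the exact argument names as an assumption):

```python
# The validation loader reuses the speaker lookup table built from the
# training filelist, so speaker ids stay consistent during training.
trainset = TextMelLoader(hparams.training_files, hparams)
valset = TextMelLoader(hparams.validation_files, hparams,
                       speaker_ids=trainset.speaker_ids)
```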

pneumoman commented 4 years ago

It's in inference.ipynb

When you calculate the male and female speakers, you pass in only the evaluation filelist, which contains just 68 of the 123 speakers.

I added code to retrieve a sample of the chosen speaker and found that in most cases it sounded wildly different from the generated voice.

I wonder if this plays any role in the men sounding like women.


rafaelvalle commented 4 years ago

Note that the first line defining speaker_ids assumes the model was trained with the filelist we provide by default, which only includes speakers with at least 5 minutes of audio. If you trained on the entire LibriTTS dataset, replace that first line with your own training filelist.

pneumoman commented 4 years ago

That makes sense. So far I haven't added any additional speakers in this environment; I've only used the included files, so it shouldn't make any difference here, and this should also hold for the pre-trained model. The problem comes from how the mellotron_id is calculated in the inference notebook. I replaced the reference to the evaluation filelist with the training filelist and saw an immediate improvement in the correlation between the generated speaker and samples of the actual speaker. I'm not talking about the speaker used for the reference mel and rhythm, only the voice used for inference.

rafaelvalle commented 4 years ago

I finally see your point, thank you. It's a matter of substituting the filepath with `filelists/libritts_train_clean_100_audiopath_text_sid_shorterthan10s_atleast5min_train_filelist.txt`.
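In the notebook that cell becomes (sketch; `TextMelLoader` and `hparams` are defined earlier in inference.ipynb):

```python
# "Define Speakers Set": build the lookup table from the TRAINING filelist
# so every LibriTTS speaker maps to the Mellotron id used during training.
speaker_ids = TextMelLoader(
    "filelists/libritts_train_clean_100_audiopath_text_sid_shorterthan10s_atleast5min_train_filelist.txt",
    hparams).speaker_ids
```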

Please pull from master for the fix and let us know if you still find errors.

camjac251 commented 4 years ago

Does this also apply to the LJS model? Inference requires the Define Speakers Set block to be executed, but LJS has only one speaker.

pneumoman commented 4 years ago

@camjac251 TL;DR: this only applies to the inference.ipynb notebook.
If you have switched to a different model such as LJS, you need to adjust the speaker ids so that they match; otherwise your speakers won't correspond, or, with LJS, you will most likely get an "index out of range" error.
Basically it comes down to this: you have to use the same speaker ids for inference that you used for training. Rafael's post above shows how this is done, and the single-speaker case is sketched below.
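For a single-speaker model like LJS there is only one valid Mellotron id, so the speaker handling reduces to something like this (a minimal sketch, assuming the usual PyTorch setup from the notebook):

```python
import torch

# LJS was trained with a single speaker, so the only valid id is 0.
# Passing any other id would index past the speaker embedding table.
speaker_id = torch.LongTensor([0]).cuda()
```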