pneumoman closed this issue 4 years ago
Where are you seeing this behavior? Note that during training we set the validation data loader with the speakers from the training data loader. https://github.com/NVIDIA/mellotron/blob/master/train.py#L44
It's in inference.ipynb
When you calculate the male and female speakers, you only pass in the evaluation filelist, which contains only 68 of the 123 speakers.
I added code to retrieve a sample of the chosen speaker and found that in most cases it was wildly different from the generated voice.
I wonder if this plays any role in the men sounding like women.
Note that the first line defining speaker_ids
assumes the model was trained with the file list we provided by default, which only includes speakers with at least 5 minutes.
If you trained with the entire LibriTTS dataset, replace the first line with your training data.
That makes sense. So far I haven't added any additional speakers in this environment; I've only used the included files, so the same should be true for the pre-trained model. The problem comes from how the mellotron_id is calculated in the inference Jupyter notebook. I just replaced the reference to the evaluation filelist with the training filelist and saw an immediate improvement in the correlation between the generated speaker and samples of the actual speaker. I'm not talking about the speaker used for the reference mel and rhythm, only the voice used for inference.
I finally see your point, thank you.
It's a matter of substituting the filepath with filelists/libritts_train_clean_100_audiopath_text_sid_shorterthan10s_atleast5min_train_filelist.txt
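Concretely, the lookup should be built from the training filelist. Below is a standalone sketch of how a mellotron-style speaker lookup is typically derived (sorted unique speaker ids from `audiopath|text|sid` lines, mapped to consecutive indices); the sample lines are hypothetical and this is an approximation, not the repo's exact code:

```python
def build_speaker_lookup(filelist_lines):
    """Map dataset speaker ids to consecutive mellotron ids (sorted order)."""
    sids = sorted({int(line.strip().split("|")[2]) for line in filelist_lines})
    return {sid: idx for idx, sid in enumerate(sids)}

# In the notebook you would read the *training* filelist instead, e.g.:
# lines = open("filelists/libritts_train_clean_100_audiopath_text_sid_"
#              "shorterthan10s_atleast5min_train_filelist.txt").readlines()
lines = ["a.wav|hello|19", "b.wav|hi|26", "c.wav|hey|32"]  # hypothetical
print(build_speaker_lookup(lines))  # {19: 0, 26: 1, 32: 2}
```

The key point is that the ids come out of whatever filelist you pass in, so passing the evaluation filelist produces a different (and smaller) index space than the one the model was trained with.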
Please pull from master for the fix and let us know if you still find errors.
Does this also apply to the LJS model? Inference requires the Define Speakers Set block to be executed; however, LJS only has one speaker.
@camjac251
TL;DR: This only applies to the inference.ipynb notebook.
If you have switched to a different model such as LJS, you would need to adjust the speaker IDs so that they match; otherwise your speakers will not correspond, or, in all likelihood with LJS, you would get an "index out of range" error.
Basically this comes down to needing to use the same speaker IDs for inference as you used for training. Rafael's post above shows how this is done.
The mapping of speakers to mellotron_ids in inference.ipynb is incorrect: it uses the evaluation filelist instead of the training filelist. The evaluation filelist does not contain all of the speaker ids, which results in the speaker ids not being matched correctly. I've attached a test program to demonstrate this, along with its output, but here are a few examples:
test_speaker_ids.out.txt
test_speaker_ids.py.txt