NVIDIA / mellotron

Mellotron: a multispeaker voice synthesis model based on Tacotron 2 GST that can make a voice emote and sing without emotive or singing training data
BSD 3-Clause "New" or "Revised" License

Using own text to generate speech using Mellotron #62

Closed: astricks closed this issue 4 years ago

astricks commented 4 years ago

Hi,

I'm trying to generate speech from my own text, with the style (pitch contour, rhythm, f0) transferred from an input wav file. I've been trying to modify the code in model.py, but I'm not able to inject my own text.

Could you please give me some guidance?

astricks commented 4 years ago

Specifically, I'm trying to generate my own embedded_text and use that instead of the one generated from the input audio. This might be related to https://github.com/NVIDIA/mellotron/issues/57.
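For context, this is roughly how I'm building the encoded text for the replacement sentence. I'm following the helpers from the repo's inference notebook (text_to_sequence with the ARPAbet dictionary), so the exact names and paths below are assumptions taken from that notebook rather than from my modified model.py:

```python
import torch
from hparams import create_hparams
from text import cmudict, text_to_sequence

# Helpers as used in the repo's inference notebook (paths/names may differ locally).
hparams = create_hparams()
arpabet_dict = cmudict.CMUDict('data/cmu_dictionary')

def encode_text(text):
    """Encode a raw string the same way the notebook does; result shape (1, T_text)."""
    sequence = text_to_sequence(text, hparams.text_cleaners, arpabet_dict)
    return torch.LongTensor(sequence)[None, :].cuda()

new_text_encoded = encode_text("my replacement sentence goes here")
# The encoder maps this to encoder_outputs of shape (1, T_text, encoder_embedding_dim);
# anything reused from the reference (e.g. its attention map) has to agree with T_text.
```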

astricks commented 4 years ago

More specifically, it seems I can't get the matrix dimensions to match, and I'm not sure what I need to pad:

/data/mellotron/model.py in forward(self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask, attention_weights)
     94
     95         attention_weights = F.softmax(alignment, dim=1)
---> 96         attention_context = torch.bmm(attention_weights.unsqueeze(1), memory)
     97         attention_context = attention_context.squeeze(1)
     98

RuntimeError: invalid argument 6: wrong matrix size at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:534
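For what it's worth, the failing bmm just needs the reused attention weights and the new memory to share the encoder-time dimension. A standalone PyTorch snippet (nothing Mellotron-specific; the lengths are made up) reproduces the same class of error when the encoded lengths differ:

```python
import torch

B, enc_dim = 1, 512
T_ref, T_new = 58, 61  # hypothetical encoded lengths: reference text vs. replacement text

attention_weights = torch.softmax(torch.randn(B, T_ref), dim=1)  # alignment taken from the reference
memory = torch.randn(B, T_new, enc_dim)                          # encoder outputs for the new text

# torch.bmm multiplies (B, 1, T_ref) by (B, T_new, enc_dim); it only works when T_ref == T_new,
# otherwise it raises the same kind of size-mismatch RuntimeError as above.
context = torch.bmm(attention_weights.unsqueeze(1), memory)
```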

CookiePPP commented 4 years ago

@astricks What you're asking for is slight madness :smile: Try inputting text of the same length as the original text and see what happens.

astricks commented 4 years ago

I got something to work. It seems the word "en-han-ces" doesn't play well with the dictionary; after removing that change, it worked. @CookiePPP could you expand a bit on the madness part? 😄 What I want to do is generate sentences spoken in the style of a reference sentence. My understanding is that Tacotron does prosody transfer, and Mellotron will give me rhythmic transfer as well.
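Side note in case it helps anyone else: an easy way to catch words like that before synthesis is to check them against the ARPAbet dictionary the notebook loads. A rough sketch, assuming the repo's cmudict module and its lookup method:

```python
from text import cmudict

arpabet_dict = cmudict.CMUDict('data/cmu_dictionary')  # same file the inference notebook loads

def out_of_dictionary(text):
    """List words with no ARPAbet entry; hyphenated spellings like 'en-han-ces' won't match."""
    words = [w.strip('.,;!?"') for w in text.lower().split()]
    return [w for w in words if not arpabet_dict.lookup(w)]

print(out_of_dictionary('exploring the en-han-ces of space'))  # -> ['en-han-ces']
```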

CookiePPP commented 4 years ago

@astricks The source rhythm is an attention map that is run over the text (encoder_outputs). I don't expect you to be able to change the input and have it still sound natural, though it would work for same-length inputs. Also, f0 would be teacher forced, so I'd expect that to also affect the naturalness when changing the text.

If you try it (with same-length inputs), can you upload the audio file you generate as well as the original and updated texts? I'm curious what it sounds like.
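For reference, the same-length transfer I have in mind follows the flow of the repo's inference notebook; method names like inference_noattn and the tuple layout below are taken from that notebook, so treat the exact signatures as assumptions that may differ from your checkout:

```python
import torch

# Sketch of the rhythm-transfer path (assumed names, per the inference notebook).
# `mellotron` is the loaded Tacotron2 model; `x` is the parsed reference batch
# (text, mel, speaker id, f0, ...); `text_encoded_new` is the replacement text,
# encoded to the SAME length as the reference text.

with torch.no_grad():
    # 1) Forward pass over the reference to recover its alignment map, i.e. the rhythm.
    _, _, _, rhythm = mellotron.forward(x)
    rhythm = rhythm.permute(1, 0, 2)  # -> (B, T_decoder, T_text)

    # 2) Re-synthesize with the new text but the reference mel, pitch contour and rhythm.
    #    The rhythm's T_text axis has to match the length of text_encoded_new, hence
    #    same-length inputs; f0 is still teacher forced from the reference, as noted above.
    mel_out, mel_out_postnet, gate, _ = mellotron.inference_noattn(
        (text_encoded_new, mel_ref, speaker_id, pitch_contour_ref, rhythm))
```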

astricks commented 4 years ago

@CookiePPP I used the source example sentence and changed the text a little to generate the audio below.

Source text: "exploring the expanses of space to keep our planet safe"
Modified text: "exploding the expanses of grace to keep our planet sane"

https://drive.google.com/open?id=1w-i_T9hzwzgXOVVouXVP039YuD02ROP3

Not a bad transfer, though the sentences are very similar.

CookiePPP commented 4 years ago

@astricks Sounds pretty good, now I wonder how far it can be changed? :smile:

astricks commented 4 years ago

Not too much, even if I try to keep the same number of syllables. I think I can close this issue as resolved. Thanks for the help @CookiePPP!