Rongjiehuang / Multi-Singer

PyTorch Implementation of Multi-Singer (ACM-MM'21)
MIT License

How to use Multi-Singer for English SVS #12

Open EduardoPach opened 1 year ago

EduardoPach commented 1 year ago

Hello there, @Rongjiehuang @a-ggghost @SunMail-hub

I'm trying to use Multi-Singer for SVS with English singers, and I'm new to speech-related tasks, so I have a few questions about how to adapt this to English and about a few steps that aren't clear to me.

1 - Encoder

From my understanding, the encoder being used is the same as the one here, and I wonder whether re-training would be necessary, since the Multi-Singer paper mentions that the encoder was trained on several datasets whose languages include English (assuming the provided checkpoint comes from that training).
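For context, if it is the GE2E speaker encoder from Real-Time-Voice-Cloning, I'd expect usage roughly like the following (the Resemblyzer package wraps that same encoder; `ref.wav` is just a placeholder path):

```python
from resemblyzer import VoiceEncoder, preprocess_wav

# Sketch only: assumes the encoder is the pretrained GE2E speaker encoder
# as packaged by Resemblyzer; "ref.wav" is a placeholder reference recording.
wav = preprocess_wav("ref.wav")       # load, resample to 16 kHz, trim silence
encoder = VoiceEncoder()              # loads the pretrained GE2E weights
embed = encoder.embed_utterance(wav)  # L2-normalized (256,) numpy vector
print(embed.shape)                    # -> (256,)
```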

2 - Multi-Singer

Should I re-train Multi-Singer?

3 - Modified FastSpeech 2 + Multi-Singer for SVS

Is the modified version used in the paper the one in the linked FastSpeech 2 repo? I ask because the architecture in that repo differs from the one shown in the appendix of the Multi-Singer paper.

Also, to generate the acoustic features with FastSpeech 2 for an unseen singer, how would the singer embedding be added without pre-training FastSpeech 2 on the new singer?

I know that's a lot to ask, but could you give an example of how to use FastSpeech 2 + Multi-Singer to accomplish SVS for an unseen singer?

Thanks a lot in advance and sorry for the number of questions :P

Rongjiehuang commented 1 year ago

Hi, thanks for reaching out!

  1. Yes, you may fine-tune the encoder on your custom dataset for better performance.
  2. Multi-Singer converts a spectrogram into the singing voice, which means an acoustic model (text -> spectrogram) is needed for full singing voice synthesis. Thus, it could be better to fine-tune Multi-Singer on the spectrograms generated from text.
  3. For the modified version of FastSpeech 2, please try DiffSinger. Since the training data include a variety of singers, zero-shot generalization is possible: we add the singer identity embedding to the model (see the sketch below). In addition, few-shot adaptation on the new singer is also recommended.
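A rough sketch of that inference pipeline, where every name below is a placeholder rather than the exact API of this repo or DiffSinger:

```python
import torch

def synthesize(text, ref_wav, singer_encoder, acoustic_model, vocoder):
    """Zero-shot SVS sketch: all callables here are placeholders."""
    # 1) Singer identity embedding from a reference recording of the unseen singer.
    spk_embed = singer_encoder(ref_wav)          # e.g. a (256,) vector

    # 2) Acoustic model (FastSpeech 2 / DiffSinger variant) maps text plus the
    #    embedding to a mel-spectrogram of shape (n_frames, n_mels).
    mel = acoustic_model(text, spk_embed)

    # 3) Multi-Singer vocoder conditions on the mel (and the embedding) and
    #    generates the waveform: n_frames * hop_size samples.
    with torch.no_grad():
        wav = vocoder(mel, spk_embed)
    return wav
```

For few-shot adaptation, you would instead fine-tune the acoustic model (and optionally the vocoder) on a small amount of the new singer's data before running the same pipeline.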
EduardoPach commented 1 year ago

@Rongjiehuang thanks for the guidance!

Is there any limitation on the length of the generated singing voice from Multi-Singer?

I've tried using FastSpeech 2 to generate the mel with one of their pre-trained models and feeding that into Multi-Singer just to check the result, but for some reason Multi-Singer generated 2s of audio, whereas FastSpeech 2 generated 11s of audio.

netpi commented 1 year ago

Hi @Rongjiehuang, are the OpenSinger duration files (e.g., MFA-processed) available?

Rongjiehuang commented 1 year ago

> @Rongjiehuang thanks for the guidance!
>
> Is there any limitation on the length of the generated singing voice from Multi-Singer?
>
> I've tried using FastSpeech 2 to generate the mel with one of their pre-trained models and feeding that into Multi-Singer just to check the result, but for some reason Multi-Singer generated 2s of audio, whereas FastSpeech 2 generated 11s of audio.

FastSpeech 2 and Multi-Singer work like this: for every one mel-spectrogram frame that FastSpeech 2 generates, Multi-Singer conditions on it and generates 256 data points, according to the hop size.
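A quick sanity check of that frame-to-sample relationship (hop_size = 256 as above; the 24 kHz sampling rate below is an assumption, so substitute whatever your config actually uses):

```python
import numpy as np

def check_lengths(mel: np.ndarray, wav: np.ndarray,
                  hop_size: int = 256, sample_rate: int = 24000) -> None:
    """Compare vocoder output length against the mel input length.

    hop_size=256 matches the reply above; sample_rate=24000 is an assumed
    value -- replace both with your actual config.
    """
    n_frames = mel.shape[0]  # mel assumed shaped (n_frames, n_mels)
    expected_sec = n_frames * hop_size / sample_rate
    actual_sec = wav.shape[-1] / sample_rate
    print(f"{n_frames} frames -> expected {expected_sec:.2f}s, got {actual_sec:.2f}s")
```

A large mismatch like 2s vs. 11s usually points to a transposed mel ((n_mels, n_frames) instead of the orientation the vocoder expects) or to hop-size/sampling-rate settings that differ between the acoustic model's checkpoint and the vocoder's config.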

Rongjiehuang commented 1 year ago

> Hi @Rongjiehuang, are the OpenSinger duration files (e.g., MFA-processed) available?

Hi, the duration files have not been released. Please use MFA to generate them.
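For reference, a typical MFA workflow looks roughly like this (the paths, model names, and hop/sample-rate values are illustrative, so check the MFA docs for models that match the corpus language):

```python
# First align the corpus with the MFA CLI, e.g. (names illustrative):
#   mfa align /path/to/OpenSinger /path/to/lexicon.dict /path/to/acoustic.zip out_dir
# Then convert each output TextGrid into per-phoneme frame durations.
import textgrid  # pip install textgrid

def durations_from_textgrid(path: str, hop_size: int = 256,
                            sample_rate: int = 24000) -> list[tuple[str, int]]:
    """hop_size/sample_rate are assumptions -- match them to your mel config."""
    tg = textgrid.TextGrid.fromFile(path)
    phones = next(t for t in tg.tiers if t.name.endswith("phones"))
    durs = []
    for itv in phones:
        start = int(round(itv.minTime * sample_rate / hop_size))
        end = int(round(itv.maxTime * sample_rate / hop_size))
        durs.append((itv.mark or "sil", end - start))
    return durs
```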