yl4579 / StyleTTS2

StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

Can I train the Chinese model? #70

Closed Tsangchi-Lam closed 11 months ago

Tsangchi-Lam commented 11 months ago

I want to train a Chinese model. Do you support mixed Chinese and English input?

Kreevoz commented 11 months ago

Look at issue #41 to check the current progress.

yl4579 commented 11 months ago

You can, but with the current English-only PL-BERT the quality won’t be as good as originally proposed. I’m working on a multilingual PL-BERT now; it may take one or two months to finish.

yl4579 commented 11 months ago

See https://github.com/yl4579/StyleTTS/issues/10 for more details.

hermanseu commented 11 months ago

@yl4579 I trained StyleTTS2 successfully on Chinese data, and it sounds very good. Since wavlm-base-plus only supports English, I used a Chinese HuBERT model as the SLM. Now I want to train a model for both Chinese and English, but I can't find a pre-trained model that supports Chinese and English at the same time. Do you have any suggestions about the SLM?

yl4579 commented 11 months ago

You can try the Whisper encoder, which was trained on multiple languages. You can also try multilingual wav2vec 2.0: https://huggingface.co/facebook/wav2vec2-large-xlsr-53
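For anyone trying this, here is a minimal sketch (assumptions, not code from this repo) of loading the multilingual XLSR-53 model with Hugging Face transformers and pulling out the layer-wise features an SLM discriminator could consume, in the same spirit as the WavLM feature extraction; wiring it into the StyleTTS2 config is left to the reader:

```python
# Minimal sketch: swap the SLM backbone to multilingual wav2vec 2.0 (XLSR-53).
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

slm = Wav2Vec2Model.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53", output_hidden_states=True
).eval()
extractor = Wav2Vec2FeatureExtractor()  # defaults match 16 kHz mono input

wav = torch.randn(16000)  # placeholder: 1 second of 16 kHz audio
inputs = extractor(wav.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    out = slm(**inputs)

# One tensor per transformer layer (plus the conv features), each of shape
# (batch, frames, 1024); StyleTTS2 feeds all layers to its SLM discriminator.
features = torch.stack(out.hidden_states)
print(features.shape)
```

Whether XLSR's features are as discriminative as WavLM's for adversarial training is an open question, so treat this as a starting point rather than a drop-in replacement.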

zhouyong64 commented 11 months ago

> @yl4579 I trained StyleTTS2 successfully on Chinese data, and it sounds very good.

Did you use the English PL-BERT or did you train PL-BERT with Chinese data?

hermanseu commented 11 months ago

I trained PL-BERT with Chinese data.

Moonmore commented 11 months ago

> I trained StyleTTS2 successfully on Chinese data, and it sounds very good. Since wavlm-base-plus only supports English, I used a Chinese HuBERT model as the SLM. Now I want to train a model for both Chinese and English, but I can't find a pre-trained model that supports Chinese and English at the same time. Do you have any suggestions about the SLM?

What is your modeling unit? IPA or Pinyin?

hermanseu commented 11 months ago

@Moonmore The modeling unit is pinyin.

test.zip is a synthesized sample.

zhouyong64 commented 11 months ago

> @Moonmore The modeling unit is pinyin.
>
> test.zip is a synthesized sample.

Did you use pinyin tones when training the Chinese PL-BERT? I believe StyleTTS uses F0 to model Chinese tones. Can a PL-BERT with tones work with StyleTTS?

hermanseu commented 11 months ago

I trained the Chinese PL-BERT without pinyin tones. But a PL-BERT with tones may also work normally, so you can try it.

zhouyong64 commented 11 months ago

> I trained the Chinese PL-BERT without pinyin tones. But a PL-BERT with tones may also work normally, so you can try it.

How many samples did you use to train Chinese PL-BERT?

hermanseu commented 11 months ago

@zhouyong64 I used about 84,000,000 text sentences to train the Chinese PL-BERT model.

Moonmore commented 11 months ago

> @Moonmore The modeling unit is pinyin.
>
> test.zip is a synthesized sample.

Sounds really good. May I ask whether the pinyin units you mentioned can be decomposed into phones? And how do you align the PL-BERT input with the text input?

hermanseu commented 11 months ago

@Moonmore
I used the same pinyin phonemes (shengmu and yunmu, i.e., initials and finals) to train all the models. But when training the ASR model, I used the phonemes without tones. If a pinyin unit cannot be decomposed, the whole pinyin syllable could perhaps be treated as a single phoneme.

@zhouyong64 Sorry for the wrong information yesterday: I trained PL-BERT with tones, and trained the ASR model without tones.

> I trained the Chinese PL-BERT without pinyin tones. But a PL-BERT with tones may also work normally, so you can try it.

Moonmore commented 11 months ago

> @Moonmore I used the same pinyin phonemes (shengmu and yunmu, i.e., initials and finals) to train all the models. But when training the ASR model, I used the phonemes without tones. If a pinyin unit cannot be decomposed, the whole pinyin syllable could perhaps be treated as a single phoneme.
>
> @zhouyong64 Sorry for the wrong information yesterday: I trained PL-BERT with tones, and trained the ASR model without tones.

So can I understand it as: all the text-related models are trained on the same phoneme units, and features are obtained for each minimal pronunciation unit, e.g. ni3 hao3 -> n i3 h ao3, so for an input length of 4 the output length of both the text encoder and the BERT model is also 4? And how do you construct the PL-BERT labels?
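For readers following along, a minimal sketch of producing units like n i3 h ao3 with the pypinyin library (pypinyin is one common choice for this; the thread does not say which frontend was actually used):

```python
# Decompose Chinese text into toned initial/final units,
# e.g. "你好" ("ni3 hao3") -> ["n", "i3", "h", "ao3"].
from pypinyin import pinyin, Style

def to_units(text: str) -> list[str]:
    initials = pinyin(text, style=Style.INITIALS, strict=False)
    finals = pinyin(text, style=Style.FINALS_TONE3, strict=False)
    units = []
    for (ini,), (fin,) in zip(initials, finals):
        if ini:            # zero-initial syllables (e.g. "an") have no shengmu
            units.append(ini)
        units.append(fin)  # final carries the tone number
    return units

print(to_units("你好"))  # ['n', 'i3', 'h', 'ao3']
```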

hermanseu commented 11 months ago

@Moonmore Yes, the output lengths of the text encoder and BERT are the same as the input length. For the PL-BERT labels, you can read the logic of dataloader.py in the PL-BERT repo; it explains this clearly.
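As a rough illustration of that 1:1 alignment, here is a sketch of how PL-BERT-style labels can be built, with hypothetical token ids; it is an interpretation of the idea, not the actual dataloader.py code:

```python
# Each phoneme inherits the token id of the word it came from, so the
# phoneme inputs and the grapheme labels stay length-aligned.
words = ["ni3", "hao3"]                      # word-level units (pinyin here)
word_phonemes = [["n", "i3"], ["h", "ao3"]]  # decomposition per word
word_ids = [101, 102]                        # hypothetical tokenizer ids

phoneme_seq, grapheme_labels = [], []
for wid, phones in zip(word_ids, word_phonemes):
    phoneme_seq.extend(phones)
    grapheme_labels.extend([wid] * len(phones))  # repeat word id per phoneme

assert len(phoneme_seq) == len(grapheme_labels)  # 1:1 alignment, length 4
print(phoneme_seq)       # ['n', 'i3', 'h', 'ao3']
print(grapheme_labels)   # [101, 101, 102, 102]
```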

Moonmore commented 11 months ago

> @Moonmore Yes, the output lengths of the text encoder and BERT are the same as the input length. For the PL-BERT labels, you can read the logic of dataloader.py in the PL-BERT repo; it explains this clearly.

@hermanseu Thank you for your reply.