Closed: Tsangchi-Lam closed this issue 11 months ago.
Look at issue #41 to check the current progress.
You can, but with the current English-only PL-BERT the quality won't be as good as originally proposed. I'm working on a multilingual PL-BERT now, and it may take one or two months to finish.
See https://github.com/yl4579/StyleTTS/issues/10 for more details.
@yl4579 I trained StyleTTS2 successfully using Chinese data, and it sounds very good. Since wavlm-base-plus only supports English, I used a Chinese HuBERT model as the SLM. Now that I want to train a model for both Chinese and English, I cannot find a pre-trained model that supports Chinese and English at the same time. Do you have any suggestions for the SLM?
You can try the Whisper encoder, which was trained on multiple languages. You can also try multilingual wav2vec 2.0: https://huggingface.co/facebook/wav2vec2-large-xlsr-53
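For instance, a minimal sketch of pulling features from that multilingual wav2vec 2.0 checkpoint, assuming the SLM discriminator just consumes stacked hidden states; `slm_features` and the extractor settings here are illustrative, not StyleTTS2's actual wiring:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL_ID = "facebook/wav2vec2-large-xlsr-53"  # multilingual wav2vec 2.0

# Built by hand in case the hub checkpoint ships no preprocessor config.
extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
slm = Wav2Vec2Model.from_pretrained(MODEL_ID, output_hidden_states=True)
slm.eval()

@torch.no_grad()
def slm_features(wav_16khz: torch.Tensor) -> torch.Tensor:
    """Frame-level features for a mono 16 kHz waveform, with all layers
    stacked so a downstream discriminator can learn its own layer weighting."""
    inputs = extractor(wav_16khz.numpy(), sampling_rate=16000, return_tensors="pt")
    out = slm(inputs.input_values)
    return torch.stack(out.hidden_states, dim=1)  # (1, n_layers+1, frames, dim)
```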
Did you use the English PL-BERT or did you train PL-BERT with Chinese data?
I trained PL-BERT with Chinese data.
What is your modeling unit? IPA or Pinyin?
@Moonmore The modeling unit is pinyin.
test.zip is a synthesized sample.
Did you use pinyin tones when training the Chinese PL-BERT? I believe StyleTTS uses F0 to model Chinese tones. Can a PL-BERT trained with tones work with StyleTTS?
I trained the Chinese PL-BERT without pinyin tones, but a PL-BERT with tones may also work normally, so you can try it.
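If you want to compare both settings, dropping tones is just a matter of removing the trailing tone digit from each unit. A minimal sketch, where `strip_tones` is a hypothetical helper and not from either repo:

```python
import re

def strip_tones(units):
    """Drop a trailing tone digit (1-5) from each pinyin unit,
    e.g. ['n', 'i3', 'h', 'ao3'] -> ['n', 'i', 'h', 'ao']."""
    return [re.sub(r"[1-5]$", "", u) for u in units]
```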
How many samples did you use to train Chinese PL-BERT?
@zhouyong64 I used about 84,000,000 text sentences to train the Chinese PL-BERT model.
The sample sounds really good. May I ask whether the pinyin unit you mentioned can be disassembled into phones? And how do you align the PL-BERT input with the text input?
@Moonmore I used the same pinyin phonemes (sheng1 mu3 yun4 mu3, i.e. toned shengmu/yunmu: initials and finals) to train all the models, but when training the ASR model I used the phonemes without tones. If a pinyin unit cannot be disassembled, maybe the whole pinyin syllable can be regarded as one phoneme.
@zhouyong64 Sorry for the wrong information yesterday: I trained PL-BERT with tones and trained the ASR model without tones.
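As a sketch of that unit scheme, here is one way to get toned shengmu/yunmu units with pypinyin; the `to_units` helper is hypothetical, and whether it matches the exact front end used here is an assumption:

```python
from pypinyin import pinyin, Style

def to_units(text):
    """Split Chinese text into toned initial/final (shengmu/yunmu) units."""
    initials = pinyin(text, style=Style.INITIALS, strict=False)
    finals = pinyin(text, style=Style.FINALS_TONE3, strict=False)
    units = []
    for (ini,), (fin,) in zip(initials, finals):
        if ini:  # zero-initial syllables (e.g. "an") have no shengmu
            units.append(ini)
        units.append(fin)
    return units

print(to_units("你好"))  # ['n', 'i3', 'h', 'ao3']
```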
So can I understand it like this: all text-related models are trained with the same phoneme units, and features are obtained for each minimal pronunciation unit, e.g. ni3 hao3 -> n i3 h ao3, so the input length is 4 and the output length of both the text encoder and the BERT model is also 4? And how do you construct the PL-BERT labels?
@Moonmore Yes, the output lengths of the text encoder and the BERT model are the same as the input lengths. For the PL-BERT labels, you can read the logic of dataloader.py in the PL-BERT repo; it explains this clearly.
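For orientation, a rough sketch of what that label construction looks like, as I understand it: mask some phoneme positions and keep two aligned targets, the original phoneme (MLM) and the grapheme token each phoneme belongs to. `MASK_ID`, the masking ratio, and the -100 ignore index below are assumptions; dataloader.py in the PL-BERT repo is authoritative.

```python
import random

MASK_ID = 0  # hypothetical id of the <mask> phoneme token

def make_labels(phoneme_ids, grapheme_ids, mask_ratio=0.15):
    """Mask phoneme positions and build aligned MLM + grapheme targets."""
    inputs, mlm_labels = [], []
    for pid in phoneme_ids:
        if random.random() < mask_ratio:
            inputs.append(MASK_ID)
            mlm_labels.append(pid)   # predict the original phoneme here
        else:
            inputs.append(pid)
            mlm_labels.append(-100)  # position ignored by the MLM loss
    # grapheme_ids already has one entry per phoneme position, so the
    # grapheme-prediction target stays aligned 1:1 with the inputs.
    return inputs, mlm_labels, grapheme_ids
```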
@hermanseu Thank you for your reply.
I want to train the Chinese model. Does your model support mixed Chinese and English input?