yl4579 / StyleTTS2

StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

Awesome in english but no support for other languages - please add an example for another language (german, italian, french etc) #41

Open cmp-nct opened 10 months ago

cmp-nct commented 10 months ago

The readme makes it sound very simple: "Replace bert with xphonebert". Looking a bit closer, it seems it's quite a feat to make StyleTTS2 speak non-English languages (https://github.com/yl4579/StyleTTS2/issues/28).

StyleTTS2 looks like the best approach we have right now, but English-only is a killer for many, as it means any app will be limited to English with no prospect for other users in sight.

Some help to get this going in foreign languages would be awesome.

It appears we need to change the inference code and re-train the text and phonetics components. Any demo/guide would be great.

yl4579 commented 10 months ago

So far the repo is a research project, and its main purpose is more as a proof of concept for the paper than as a full-fledged open-source project. I agree that PL-BERT is the major obstacle to generalizing to other languages, but training large-scale language models, particularly on multiple languages, can be very challenging. With the resources I have at school, training PL-BERT on an English-only corpus with 3 A40s took me a month; with all the ablation studies and experiments, I spent an entire summer on this project for just a single language.

I'm not affiliated with any company, I'm only a PhD student, and the GPU resources in our lab need to be prioritized for new research projects. I don't think I will have the resources to train a multilingual PL-BERT model for the time being, so PL-BERT is probably not the best approach to multilingual models for StyleTTS 2.

I have never tried XPhoneBERT myself, but it seems to be a promising alternative to PL-BERT. The only problem with it is that it uses a different phonemizer, which may also be related to #40. The current phonemizer was taken from VITS, which also incurs license issues (MIT vs. GPL). It would be great if someone could help switch the phonemizer and BERT model to something like XPhoneBERT that is compatible with the MIT license and also supports multiple languages.

The basic idea is to re-train the ASR model (https://github.com/yl4579/AuxiliaryASR) using the phonemizer of XPhoneBERT, replace PL-BERT with XPhoneBERT, and re-train the model from scratch. Since the models, especially the LibriTTS one, took about 2 weeks to train on 4 A100s, I do not think I have enough GPU resources to work on this for the time being. If anyone is willing to sponsor GPUs and datasets for either multilingual PL-BERT or XPhoneBERT StyleTTS 2, I'm happy to extend this project in the multilingual direction.
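
To make the "switch the phonemizer" idea a bit more concrete, here is a minimal sketch of the XPhoneBERT front end, loosely based on my reading of the vinai/xphonebert-base model card. The text2phonemesequence package, the "eng-us" language code and infer_sentence() are assumptions on my part; check the XPhoneBERT README for the exact API before relying on this.

# Sketch only: produce XPhoneBERT-style phoneme sequences that could replace
# the current espeak/VITS phonemizer output in the training pipeline.
from transformers import AutoModel, AutoTokenizer
from text2phonemesequence import Text2PhonemeSequence  # package from the XPhoneBERT authors (assumed name)

# Pre-trained multilingual phoneme-level BERT and its tokenizer
xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

# CharsiuG2P-style grapheme-to-phoneme front end; language codes should be checked per language
text2phone = Text2PhonemeSequence(language="eng-us", is_cuda=False)

phonemes = text2phone.infer_sentence("This is a test sentence.")  # space-separated phoneme string
inputs = tokenizer(phonemes, return_tensors="pt")
features = xphonebert(**inputs).last_hidden_state  # phoneme-level contextual embeddings
print(phonemes, features.shape)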

cmp-nct commented 10 months ago

I think it would be doable to get the GPU time, say one week of 8x A100, maybe in exchange for naming the resulting model after the sponsor. One of the cloud providers might be interested, or some of the people on the ML Discords who train a lot might have it spare. I was offered GPU time once; I could ask the guy. But without datasets that wouldn't help. That said: if you need GPU time, let me know and I'll ask.

Datasets:

German:
  • https://opendata.iisys.de/dataset/hui-audio-corpus-german/ (https://github.com/iisys-hof/HUI-Audio-Corpus-German): TTS dataset from a university (high quality, 6 main speakers, I think 40-50 hours of studio-quality recordings)
  • https://github.com/thorstenMueller/Thorsten-Voice (11 hours, one person)

Italian (TTS datasets, LJSpeech-affiliated?):
  • https://huggingface.co/datasets/z-uo/female-LJSpeech-italian
  • https://huggingface.co/datasets/z-uo/male-LJSpeech-italian

Multilingual:
  • https://www.openslr.org/94/ (audiobook-based LibriTTS)
  • https://github.com/freds0/CML-TTS-Dataset (more than 3000 hours, CC licensed)

Sidenote: for detecting unclean audio, "CLAP" from LAION could possibly be used.
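
A rough sketch of how that CLAP idea could flag unclean clips, assuming the laion_clap package API (CLAP_Module, load_ckpt, get_text_embedding, get_audio_embedding_from_filelist) as I recall it from the LAION-CLAP README; "sample.wav", the prompts and the 0.05 margin are placeholders:

import laion_clap

# Load a pre-trained CLAP model (load_ckpt() fetches a default checkpoint).
model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()

# Describe clean vs. degraded speech in text.
prompts = ["a clear, clean studio-quality speech recording",
           "a noisy, distorted or clipped audio recording"]
text_emb = model.get_text_embedding(prompts, use_tensor=False)  # (2, dim)

# Embed a candidate clip and keep it only if it scores closer to the "clean" prompt.
audio_emb = model.get_audio_embedding_from_filelist(x=["sample.wav"], use_tensor=False)  # (1, dim)
sims = audio_emb @ text_emb.T        # similarity scores against both prompts
keep = sims[0, 0] - sims[0, 1] > 0.05  # arbitrary margin, would need tuning
print(sims, keep)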

yl4579 commented 10 months ago

Multilingual speech datasets are more difficult to get than text datasets. XPhoneBERT, for example, was trained entirely on Wikipedia in 100+ languages, but getting speech data with transcriptions in 100+ languages is much harder. XTTS has multilingual support, but the data used seems private. I believe the creator @erogol was once interested in StyleTTS but did not proceed to integrate it into the Coqui API for some reason. It would be great if he could help with multilingual support. I will ping him to see if he is still interested.

cmp-nct commented 10 months ago

I found quite good datasets for Italian and German and will take another look for more. I will update the previous comment. Roughly how much data (length, number of speakers) is needed for training?

yl4579 commented 10 months ago

If you want cross-lingual generalization, I think each language should have at least 100 hours. The data you provided is probably good for a single-speaker model, but not enough for zero-shot models like XTTS. It is not feasible to get a model like that with publicly available data. We probably have to rely on something like Multilingual LibriSpeech (https://www.openslr.org/94/) and use some speech restoration models to remove bad samples. This is not a single person's effort, so everyone else is welcome to contribute.

mzdk100 commented 10 months ago

It's a pity it doesn't support Chinese.

hobodrifterdavid commented 10 months ago

I can make an 8x 3090 (24GB) machine available if it's of use: 2x Xeon E5-2698 v3 CPUs, 128GB RAM. Alternatively: a 4x 3090 box with NVLink, Epyc 7443P, 256GB, PCIe 4.0. Send a mail to dioco@dioco.io.

tosunozgun commented 10 months ago

I can support training a Turkish model; I just need help training PL-BERT on the Turkish Wikipedia dataset.

yl4579 commented 10 months ago

@hobodrifterdavid Thanks so much for your help. What you have now is probably good for multilingual PL-BERT training as long as you can keep this machine running for at least a couple of months or so. Just sent you an email for multilingual PL-BERT training.

yl4579 commented 10 months ago

I think the GPUs provided by @hobodrifterdavid would be a great start for multilingual PL-BERT training. Before proceeding, though, I need some people who speak as many languages as possible (hopefully with some knowledge of IPA) to help with the data preparation. I only speak English, Chinese and Japanese, so I can only help with these 3 languages.

My plan is to use this multilingual BERT tokenizer: https://huggingface.co/bert-base-multilingual-cased; tokenize the text, get the corresponding tokens, use phonemizer to get the corresponding phonemes, and align the phonemes with the tokens. Since this tokenizer is subword, we cannot predict the subword grapheme tokens. So my idea is that instead of predicting the grapheme tokens (which are not full graphemes anyway, and we cannot really align part of a grapheme to some of its phonemes: in English, "phonemes" can be tokenized into phone#, #me#, #s, but its actual phonemes are /ˈfəʊniːmz/, which cannot be aligned cleanly with phone#, #me# or #s), we predict the contextualized embeddings from a pre-trained BERT model.

For example, for the sentence "This is a test sentence", we get 6 tokens [this, is, a, test, sen#, #tence] and their corresponding graphemes. In particular, the two tokens [sen#, #tence] correspond to ˈsɛnʔn̩ts. The goal is to map each phoneme representation in ˈsɛnʔn̩ts to the average contextualized BERT embedding of [sen#, #tence]. This requires running the teacher BERT model, but we can extract the contextualized BERT embeddings online (during training) and maximize the cosine similarity between the predicted embeddings of these words and those of the teacher model (multilingual BERT).

Now the biggest challenge is aligning the tokenizer output to the graphemes, which may require some expertise in the specific languages. There could be potential quirks, inaccuracies or traps for certain languages. For example, phonemizer doesn't work with Japanese and Chinese directly; you first have to convert the graphemes into alphabets and then use phonemizer. The characters in these languages can also have different pronunciations depending on the context, so expertise in these languages is needed when doing NLP with them. To make sure the data preprocessing goes as smoothly and accurately as possible, any help from those who speak any language in this list (or know some linguistics about these languages) is greatly appreciated.
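
A minimal sketch of the alignment/distillation idea described above, for a whitespace-separated language where phonemizer works directly (espeak-ng must be installed). This is illustrative only, not the actual PL-BERT preprocessing or training code:

import torch
from transformers import AutoTokenizer, AutoModel
from phonemizer import phonemize

# Teacher model and subword tokenizer (the multilingual BERT mentioned above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
teacher = AutoModel.from_pretrained("bert-base-multilingual-cased")

sentence = "This is a test sentence"
words = sentence.split()

# Phonemize word by word so every phoneme span maps back to exactly one word.
phonemes_per_word = [phonemize(w, language="en-us", backend="espeak").strip() for w in words]

# Tokenize with word alignment so subword pieces can be grouped per word.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = teacher(**enc).last_hidden_state[0]   # (num_subwords, dim)

word_ids = enc.word_ids(0)                         # subword index -> word index (None for [CLS]/[SEP])
targets = []
for w_idx in range(len(words)):
    rows = [i for i, wid in enumerate(word_ids) if wid == w_idx]
    targets.append(hidden[rows].mean(dim=0))       # average embedding over the word's subword pieces

# Every phoneme of a word shares that word's averaged embedding as its target;
# a phoneme-level student would then be trained to maximize cosine similarity to these targets.
for word, phon, tgt in zip(words, phonemes_per_word, targets):
    print(word, phon, tuple(tgt.shape))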

SoshyHayami commented 10 months ago

> I think the GPUs provided by @hobodrifterdavid would be a great start for multilingual PL-BERT training. Before proceeding though, I need some people who speak as many languages as possible (hopefully also have some knowledge in IPA) to help with the data preparation. [...]

I can speak Persian, Japanese and a little bit of Arabic (and I have a friend fluent in it as well). I would very much like to help you with this. I'm also gathering labeled speech data for these languages right now (I have a little less than 100 hours for Persian and a bit for the other two). So count me in, please.

yl4579 commented 10 months ago

@SoshyHayami Thanks for your willingness to help.

Fortunately, I think most other languages that have whitespace between words can be handled with the same logic. The only supported languages that do not have spaces between words are Chinese, Japanese (and, rarely, Korean Hanja), and Burmese. These languages probably need to be handled with their own logic. I can handle the first two, and we just need someone to handle the other two (Korean Hanja and Burmese).

mzdk100 commented 10 months ago

It would be great if it could support the Chinese language! I am a native Chinese speaker; what help can I provide?

yl4579 commented 10 months ago

Maybe I'll create a new branch in the PL-BERT repo for multilingual processing scripts. Chinese and Japanese definitely need to be processed separately with their own logic. @mzdk100 If you have a good Chinese phonemizer (Chinese characters to pinyin), you are welcome to contribute.

SoshyHayami commented 10 months ago

In the case of Japanese, since it already has kana, which is basically an alphabet, can't we simply restrict it to just that for now? (Kana and romaji should be easier to phonemize, if I'm not mistaken.) Sorry, it might be a stupid idea, but I was thinking that if we had another language model that recognized the correct pronunciations based on context and converted the text (with the converted text then handed over to the phonemizer), maybe it could make things a bit easier here.

Though it'll probably make inference a torture on low-performance devices as well.

mzdk100 commented 10 months ago

@yl4579 There are two main libraries for handling Chinese tokens: jieba and pypinyin. Jieba is based on Chinese word segmentation, while pypinyin is based on Chinese pinyin conversion.

pip3 install jieba pypinyin

from pypinyin import lazy_pinyin, pinyin, Style
print(pinyin('朝阳'))  # [['zhāo'], ['yáng']]
print(pinyin('朝阳', heteronym=True))  # [['zhāo', 'cháo'], ['yáng']]
print(lazy_pinyin('聪明的小兔子'))  # ['cong', 'ming', 'de', 'xiao', 'tu', 'zi']
print(lazy_pinyin('聪明的小兔子', style=Style.TONE3))  # ['cong1', 'ming2', 'de', 'xiao3', 'tu4', 'zi']

There are many Chinese characters, and using pinyin can greatly reduce the vocabulary size and potentially make the model smaller.

import jieba
print(list(jieba.cut('你好,我是中国人'))) # ['你好', ',', '我', '是', '中国', '人']
print(list(jieba.cut_for_search('你好,我是中国人'))) # ['你好', ',', '我', '是', '中国', '人']

If using word segmentation mode, the model can learn more natural language features, but the Chinese vocabulary is very large, so the model might become huge and the computational requirements unimaginable. I highly recommend using pinyin mode, as the converted text looks more like English and doesn't require changing much of the training code.

print(' '.join(lazy_pinyin('聪明的小兔子', style=Style.TONE3))) # 'cong1 ming2 de xiao3 tu4 zi'
cmp-nct commented 10 months ago

If German ears are needed, I'd be happy to lend mine.

nicognaW commented 10 months ago

https://github.com/rime/rime-terra-pinyin/blob/master/terra_pinyin.dict.yaml

From the industrial world: this is the character-to-pinyin solution that the well-known input method editor Rime uses.

dsplog commented 10 months ago

> any help from those who speak any language in this list (or know some linguistics about these languages) is greatly appreciated

Keen to extend this to Malayalam, a Dravidian language spoken in South India. Will help with that.

rjrobben commented 10 months ago

I hope Cantonese or Traditional Chinese is also considered when training the multilingual system; I can definitely help with this language. Is there any cooperation channel for this task?

fakerybakery commented 10 months ago

> Multilingual speech datasets are more difficult to get than text datasets. [...] XTTS has multilingual support but the data used seems private. I believe the creator was once interested in StyleTTS but did not proceed to integrate it into the Coqui API for some reason. [...]

Personally, I do not support Coqui TTS. XTTS is not open source according to the OSI definition because of its ultra-restrictive license. I believe the future of TTS lies in open-source models such as StyleTTS.

yl4579 commented 10 months ago

@rjrobben I have created a slack channel for this multilingual PL-BERT: https://join.slack.com/t/multilingualstyletts2/shared_invite/zt-2805io6cg-0ROMhjfW9Gd_ix_FJqjGmQ

yl4579 commented 10 months ago

Also, https://github.com/yl4579/PL-BERT/issues/22 may be helpful, if anyone could try it out.

fakerybakery commented 10 months ago

@yl4579 Thanks for making the slack channel! Are you planning to make a slack channel for general StyleTTS 2-related discussions as well? Just because GH Discussions isn't realtime?

yl4579 commented 10 months ago

@fakerybakery I can make this channel generally StyleTTS2-related if it is better. I can change the title to StyleTTS 2 instead.

fakerybakery commented 10 months ago

Great, thanks! Maybe make one chatroom just about BERT instead?

yl4579 commented 10 months ago

Yeah I've already done that. There's a channel about multilingual PLBERT.

fakerybakery commented 10 months ago

Great! Are you planning to add the link to the README?

yl4579 commented 10 months ago

It expires every 30 days. I don't know if there's a better way to get a permanent link.

fakerybakery commented 10 months ago

I think there's a way to set it to never expire, right?

yl4579 commented 10 months ago

Yes I did that. Added to README.

yl4579 commented 10 months ago

It seems I couldn't get any data that was not already processed by Huggingface:

Using custom data configuration 20230701.bn-date=20230701,language=bn
Old caching folder /root/.cache/huggingface/datasets/wikipedia/20230701.bn-date=20230701,language=bn/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559 for dataset wikipedia exists but not data were found. Removing it. 
Downloading and preparing dataset wikipedia/20230701.bn to file:///root/.cache/huggingface/datasets/wikipedia/20230701.bn-date=20230701,language=bn/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 5152.71it/s]
Extracting data files: 100%|████████████████████| 1/1 [00:00<00:00, 2211.02it/s]
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 7667.83it/s]
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['-f', '/root/.local/share/jupyter/runtime/kernel-5364407a-2c52-4d34-99f7-2eb08d56bdd7.json']
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['-f', '/root/.local/share/jupyter/runtime/kernel-5364407a-2c52-4d34-99f7-2eb08d56bdd7.json']
ERROR:apache_beam.runners.common:Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: file:///root/.cache/huggingface/datasets/wikipedia/20230701.bn-date=20230701,language=bn/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/transforms/core.py", line 1961, in <lambda>
    wrapper = lambda x, *args, **kwargs: [fn(x, *args, **kwargs)]
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/io/iobase.py", line 1140, in <lambda>
    lambda _, sink: sink.initialize_write(), self.sink)
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/options/value_provider.py", line 193, in _f
    return fnc(self, *args, **kwargs)
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/io/filebasedsink.py", line 173, in initialize_write
    tmp_dir = self._create_temp_dir(file_path_prefix)
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/io/filebasedsink.py", line 178, in _create_temp_dir
    base_path, last_component = FileSystems.split(file_path_prefix)
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/io/filesystems.py", line 151, in split
    filesystem = FileSystems.get_filesystem(path)
  File "/root/anaconda3/envs/BERT/lib/python3.8/site-packages/apache_beam/io/filesystems.py", line 103, in get_filesystem
    raise ValueError(
ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: file:///root/.cache/huggingface/datasets/wikipedia/20230701.bn-date=20230701,language=bn/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia-train

If anyone knows how to deal with this problem, please let me know. I have searched online and couldn't find any solution so far. The closest issue I found, with no solution: https://github.com/huggingface/datasets/issues/6147

The code I used:

from datasets import load_dataset
dataset = load_dataset('wikipedia', date="20230701", language="bn", split='train', beam_runner='DirectRunner')
yl4579 commented 10 months ago

> It seems I couldn't get any data that was not already processed by Huggingface: [...] ValueError: Unable to get filesystem from specified path [...]

Solved by using dataset = load_dataset('wikimedia/wikipedia', "20230701.bn", split='train'); this is a preprocessed dataset: https://huggingface.co/datasets/wikimedia/wikipedia

yl4579 commented 10 months ago

UPDATE: I ended up git cloning the subfolder and loading the files locally.

Can anyone download the dataset, though? It keeps downloading the entire dataset, which ends in failure (connection issues), and if re-run it starts from the beginning, so the process never finishes.

Does anyone know how to load a subset of a single language? dataset = load_dataset('wikimedia/wikipedia', "20230701.bn", split='train') doesn't work.
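
Along the lines of the git-clone workaround above, one possible way is to download only a single language's subfolder with huggingface_hub and load the parquet shards locally. The "20230701.bn/*" folder pattern is my assumption about the repo layout and the available dump date; adjust it to whatever the dataset file tree actually contains:

from huggingface_hub import snapshot_download
from datasets import load_dataset

# Fetch only the files matching one language config instead of the whole dataset.
local_dir = snapshot_download(
    repo_id="wikimedia/wikipedia",
    repo_type="dataset",
    allow_patterns="20230701.bn/*",   # assumed subfolder name; check the repo file tree
)

# Load the downloaded parquet shards as a local dataset.
dataset = load_dataset("parquet", data_files=f"{local_dir}/20230701.bn/*.parquet", split="train")
print(dataset)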

SoshyHayami commented 10 months ago

Oh yeah, that dataset is a nightmare to load. I don't know why, but last time I tried I could only load it in Google Colab instead of on my own PC. As you mentioned, git cloning and loading the files locally should work.

yl4579 commented 10 months ago

Unfortunately, the machine sponsored by @hobodrifterdavid is down. I managed to write the data preprocessing script for most languages. My lab is currently short of GPUs as we are working on some projects using LLMs, but the CPUs can still be used, so I'm now running the preprocessing on my lab's machines because it doesn't use any GPU resources. Once it is done I can upload it to more stable GPU machines that someone can sponsor (if any).

fakerybakery commented 10 months ago

Colab is probably too weak, right? I think Paperspace charges around $0.50/hr for A100, not sure if that's too much

yl4579 commented 10 months ago

@fakerybakery It is back online now, but it was rebooted. I think it's quite unstable given how often this happens (within a day of my starting to work on it). Colab is too expensive and also has no multi-GPU support. I may just stick with this one and monitor the process when I get to training. People who have extra time can also help with it and ask @hobodrifterdavid for access.

yl4579 commented 10 months ago

I have preprocessed 70 languages so far, and most look good upon manual inspection (validated using Wiktionary). The only ones left are zh, zh-yue, ja, my (Chinese, Cantonese, Japanese and Burmese).

There are a few languages that are broken. If any of you speak any of the following languages, please join the Slack space and help in the multilingual-PL-BERT channel if possible.

  • bn: Bengali (phonemizer seems less accurate than charsiuG2P)
  • cs: Czech (same as above)
  • hak: Hakka (tones are phonemized and has "-", needs fixing)
  • ko: Korean (has "-" for some reason for words)
  • ms: Malay (has "-" for some reason)
  • ru: Russian (phonemizer is inaccurate for some phonemes, like tʃ/ʒ should be t͡ɕ/ʐ)
  • th: Thai (phonemizer totally broken)
  • uk: Ukrainian (phonemizer is worse than charsiuG2P)
  • vi: Vietnamese (has tones)

Hakka and Vietnamese seem like an easy fix: just stripping all the numbers from the phonemized results should be fine. Korean and Malay also seem like an easy fix, but I don't know if "-" means anything in these languages and whether removing it is okay. Thai seems totally broken, so it has to be handled separately, just like the remaining four languages.

The rest may be fixed by charsiuG2P, but charsiuG2P can't handle numbers or dates etc., which can be problematic.
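
To illustrate the "easy fix" cleanup described above, a minimal sketch of stripping tone digits and hyphens from phonemizer output. The function and example string are hypothetical, and which characters are actually safe to strip needs native-speaker review per language:

import re

def clean_phonemes(phonemized: str, strip_digits: bool = True, strip_hyphens: bool = False) -> str:
    # Remove tone digits (e.g. Hakka/Vietnamese tone numbers) from the phonemized string.
    out = re.sub(r"\d+", "", phonemized) if strip_digits else phonemized
    # Optionally replace "-" with a space (e.g. Korean/Malay), assuming it carries no meaning.
    if strip_hyphens:
        out = out.replace("-", " ")
    # Collapse any whitespace introduced by the removals.
    return re.sub(r"\s+", " ", out).strip()

# Hypothetical example with numeric tones:
print(clean_phonemes("ka1 fe3 sua2", strip_digits=True))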

ismail-yussuf commented 10 months ago

Hey guys, I'm working on a project for a TTS model that is good with Somali. I don't see many TTS models that support Somali at all. I'm collecting high-quality data for it as we speak.

@yl4579 is it fine by you guys if we also add Somali into the mix? Based on @yl4579's description of what makes a good fit, I believe Somali would work well, as it's written in English letters just like Spanish.

Also, if we need more GPUs, would renting some cloud GPUs on RunPod be beneficial? I'm willing to help out on that end as well.

fakerybakery commented 10 months ago

Hi @ismail-yussuf, we're working on adding more languages. If you're interested in this, please join the Slack channel!

yl4579 commented 10 months ago

> @yl4579 There are two main libraries for handling Chinese tokens, jieba and pypinyin. [...] It is highly recommended to use Pinyin mode, as the converted text looks more like English [...]

I found the quality is not very good, for example:

pinyin("他把这个还我了")

The output is:

[['tā'], ['bǎ'], ['zhè'], ['gè'], ['hái'], ['wǒ'], ['le']]

In this case "还" should be "huan" instead of "hai", which is a verb. Another case is

pinyin("不得了了")

The output is:

[['bù'], ['dé'], ['le'], ['le']]

The first "了" is in the word "得了" which is an adverb and should be read as "de liao", while the second "了" is a particle that specifies the tense. The library clearly can't tell the difference.

mzdk100 commented 10 months ago

Indeed, the output result is incorrect.


duchengxian commented 10 months ago

from g2pw import G2PWConverter
from multiprocessing import Process, freeze_support

if __name__ == '__main__':
    freeze_support()

    conv = G2PWConverter(style='pinyin', enable_non_tradional_chinese=True)
    print(conv('他还把这个还我了。不得了了。'))

This gives a better result: [['ta1', 'hai2', 'ba3', 'zhe4', 'ge5', 'huan2', 'wo3', 'le5', None, 'bu4', 'de2', 'liao3', 'le5', None]]

duchengxian commented 10 months ago

This library's disambiguation is actually quite good; it can even distinguish another, less common usage of 了了: 小时了了,大未必佳。 [['xiao3', 'shi2', 'liao3', 'liao3', None, 'da4', 'wei4', 'bi4', 'jia1', None]]

mzdk100 commented 10 months ago

Very good.

yl4579 commented 10 months ago

@duchengxian This looks very good. I think the dataset preparation is almost done. I will upload all the data to huggingface and wait for @hobodrifterdavid to respond and set up the 8 GPU machine for training.

dsplog commented 10 months ago

@yl4579: can you please take a look at https://github.com/yl4579/PL-BERT/pull/27? It adds the code modifications needed to support Malayalam based on bert-base-multilingual-cased.

ardha27 commented 10 months ago

> I have preprocessed 70 languages so far, and most look good upon manual inspections (validated using wiktionary). The only ones left are zh, zh-yue, ja, my (Chinese, Cantonese, Japanese and Burmese). [...]

Is there anything I can do to help add the Indonesian language?

fakerybakery commented 10 months ago

People are working on a phonemizer replacement. Do you want Indonesian added?