-
@xcmyz I tried your latest code, and the acoustic quality improved a lot; it is nearly the same as Tacotron 2, I think.
The TTS corpus I use is Chinese, and I keep the default hparams settings.
My loss seems not as …
-
Before we run fairseq-preprocess, do we need to do any preprocessing on the Chinese corpus for supNMT? For example, tokenize the Chinese corpus with jieba, then run "learnbpe" and "applybpe" with fa…
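For reference, the pipeline being asked about could be sketched roughly as follows. This is only an assumption about the intended workflow, with hypothetical file names, and it assumes jieba and the fastBPE `fast` binary are installed:

```shell
# Hypothetical file names; assumes jieba (pip install jieba) and fastBPE are available.
python -m jieba -d ' ' < train.zh > train.zh.tok      # word-segment the raw Chinese text
./fast learnbpe 32000 train.zh.tok > codes.zh         # learn BPE codes from the segmented text
./fast applybpe train.zh.bpe train.zh.tok codes.zh    # apply the codes to get subword units
# then run fairseq-preprocess on the BPE output as usual
```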
-
![image](https://user-images.githubusercontent.com/50871412/119260850-4f876b80-bc07-11eb-8894-124302600643.png)
![image](https://user-images.githubusercontent.com/50871412/119260875-675eef80-bc07-11e…
-
I tried to apply mtmsn to the DRCD Chinese corpus and found that "bert.tokenization.FullTokenizer" can't handle Chinese word tokenization. Is that why I can't use mtmsn on a Chinese corpus?
But I …
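For context, BERT's BasicTokenizer does process Chinese, but only by splitting every CJK character into its own token; it has no notion of Chinese words. A minimal sketch of that behavior (checking only the basic CJK Unified Ideographs block, which is a simplification):

```python
def split_cjk_chars(text):
    """Mimic the character-splitting step in BERT's BasicTokenizer:
    pad every CJK character with spaces so it becomes its own token.
    (Sketch: only U+4E00..U+9FFF is checked here.)"""
    out = []
    for ch in text:
        if 0x4E00 <= ord(ch) <= 0x9FFF:
            out.append(" " + ch + " ")
        else:
            out.append(ch)
    return "".join(out).split()

print(split_cjk_chars("我爱北京"))  # → ['我', '爱', '北', '京']
```

So the tokenizer does not fail on Chinese text; it simply produces single-character tokens rather than word-level ones.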
-
Now I don't want to use a WFST decoder for Chinese ASR. Is it possible to use an RNN LM for decoding directly? Do you have any solutions? Looking forward to your reply. Best wishes!
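One common WFST-free alternative is shallow fusion / n-best rescoring: combine each hypothesis's acoustic-model score with an RNN-LM score and re-rank. A toy sketch of the re-ranking step (the scores, weight, and hypotheses below are made up for illustration):

```python
def rescore_nbest(hyps, lm_weight=0.5):
    """Re-rank an n-best list by AM log-prob + lm_weight * LM log-prob.
    Each hypothesis is a (text, am_logprob, lm_logprob) tuple."""
    scored = [(am + lm_weight * lm, text) for text, am, lm in hyps]
    scored.sort(reverse=True)
    return [text for _, text in scored]

nbest = [
    ("北京 天气", -4.0, -2.0),  # made-up scores
    ("背景 天气", -3.8, -6.0),
]
print(rescore_nbest(nbest)[0])  # → 北京 天气
```

Here the LM score comes from any trained RNN LM; this only shows how the two scores would be fused, not a full decoder.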
-
Hi,
I am very interested in how MetaPAD works, especially on a Chinese corpus. However, it seems I can't find some files when I changed them to a Chinese corpus, so I need yo…
-
```
Traceback (most recent call last):
  File "···/chinese_sentiment-master/data/hotel_comment/raw_data/fix_coupus.py", line 30, in
    fix_corpus(POS, FIX_POS)
  File "···/chinese_sentiment-master/da…
```
-
I would like to extend support for more languages. To better understand the current state, I have a few questions about the tokenizer:
**1. What kind of corpus or datasets were used to build the cu…
-
Mostly, the pictures in the fine-tuning corpus have a clear focus on something, such as a character, a puppy, etc. But some picture content is more mixed, such as publicity posters, on which there …
-
Hi! Thanks for your contribution. It is an excellent piece of work!
Your idea is great, and I want to test it on my task. But my corpus language is Chinese; do I need to adjust the tokenizer and pre-trai…
yihp updated 2 months ago