-
Also, can this model be helpful on a Chinese dataset?
-
### Version
1
### DataCap Applicant
dos2un1x
### Project ID
1
### Data Owner Name
LAMOST
### Data Owner Country/Region
United States
### Data Owner Industry
Life Science / Healthcare
### W…
-
Is it able to train on a Chinese dataset?
-
@cshanbo @lukaszkaiser
Hi, I've read all of your discussion in #111, but I don't know the results of your tests on segmented and non-segmented Chinese datasets. I'm using t2t on en-zh translatio…
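The segmented vs. non-segmented distinction above is just tokenization granularity: whether the model sees individual characters or pre-segmented words. A minimal sketch in pure Python (the example sentence and the assumption that a segmenter such as jieba has already inserted spaces are illustrative, not from the original thread):

```python
# "Non-segmented" pipeline: every Chinese character is its own token.
def char_tokens(text: str) -> list[str]:
    return [c for c in text if not c.isspace()]

# "Segmented" pipeline: the corpus was pre-segmented upstream
# (e.g. by jieba), so words are already separated by spaces.
def word_tokens(segmented_text: str) -> list[str]:
    return segmented_text.split()

sentence = "我 爱 机器 翻译"  # pre-segmented form of "I love machine translation"
print(char_tokens(sentence))  # ['我', '爱', '机', '器', '翻', '译']
print(word_tokens(sentence))  # ['我', '爱', '机器', '翻译']
```

Character-level input avoids segmentation errors at the cost of longer sequences; word-level input shortens sequences but inherits any mistakes the segmenter makes.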
-
Model: Qwen-14B-Chat (QWen2)
Dataset: https://huggingface.co/datasets/Hello-SimpleAI/HC3-Chinese/blob/main/open_qa.jsonl
Environment: 2 A30 GPU
Issue 1:
Error: cannot initialize the model correctly. Disab…
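For reference, the dataset linked above (`open_qa.jsonl`) is a JSON-lines file, one JSON object per line. A minimal, hedged sketch of loading it with only the standard library (the field names inside each record, e.g. `question`, are an assumption and should be checked against the actual file):

```python
import json

def load_jsonl(path: str) -> list[dict]:
    """Read a JSON-lines file: parse one JSON object per non-empty line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records
```

If the Hugging Face `datasets` library is installed, `load_dataset("json", data_files=...)` handles the same format with caching and streaming.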
-
I learned to use the LightSpeech module to complete a TTS task in English, but I want to test TTS synthesis in Chinese. Can you give me some instructions or advice?
I found the current dataset is …
-
I used your model. The experiment used the open-source biaobei dataset and the LJSpeech dataset. After 22,000 steps it successfully synthesized mixed Chinese and English speech, but the Chinese a…
-
I'd like to ask how the Chinese results are achieved. Projects like hallo use English datasets, and their Chinese demos don't seem very good.
-
Hello,
I tried to use train.py from the repository you suggested below:
https://github.com/GitYCC/deep-text-recognition-benchmark
to train on the "TCSynth dataset" of traditional Chinese character test im…
-
Hello,
I would like to request the addition of the MAP-Neo model to your repository. MAP-Neo is the first high-performance, fully open-source bilingual (Chinese and English) LLM. This model include…