-
```
english lines number: 209941
chinese lines number: 209941
character count: 53380
Traceback (most recent call last):
  File "read_utils.py", line 178, in
    et = TextConverter(text=a, save_dir='models/…
```
-
What plugin do I need to install to handle Xlsx-type files?
-
Hi,
I found that this repo focuses ONLY on fine-tuning (with LoRA) for the Chinese language. However, LLaMA was trained mostly on an English corpus, with a vocabulary of about 30,000 tokens, which is VERY small wi…
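A rough, self-contained illustration of why a small vocabulary hurts Chinese text (this is not the repo's code, just a sketch): when a tokenizer's vocabulary lacks CJK characters, it typically falls back to UTF-8 byte tokens, and each Chinese character occupies 3 bytes, so sequences roughly triple in length.

```python
# Sketch: cost of byte-fallback tokenization for Chinese text.
text = "中文分词"  # four Chinese characters

# Every CJK character takes 3 bytes in UTF-8.
utf8_bytes = text.encode("utf-8")
print(len(text), len(utf8_bytes))  # 4 characters -> 12 bytes

# With byte-level fallback, each character costs ~3 tokens instead of 1,
# which is why vocabulary extension matters for Chinese fine-tuning.
tokens_per_char = len(utf8_bytes) / len(text)
print(tokens_per_char)  # 3.0
```

This is the motivation behind extending LLaMA's vocabulary with Chinese tokens before fine-tuning.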
-
![image](https://user-images.githubusercontent.com/17869361/70586880-51bc5c80-1c03-11ea-9152-06fbf9b2ad79.png)
![image](https://user-images.githubusercontent.com/17869361/70586870-4701c780-1c03-11e…
-
Hi,
Thanks for your amazing work. I want to know how to get a pretrained word-vector model (e.g., cc.en.300.bin) for the build_vocab function if we want to train a vocabulary for a Chinese sub-dataset. Could you p…
-
The vocab.txt in the SimBERT model produced by training from the chinese_L-12_H-768_A-12 model has changed: both the tokens and their count differ. How is the vocab.txt in the new SimBERT model generated?
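As a hedged sketch for inspecting the change (file paths here are hypothetical): a BERT-style vocab.txt stores one token per line, with the line index serving as the token id, so two vocab files can be diffed directly.

```python
def load_vocab(path):
    """Read a BERT-style vocab.txt into {token: id}; line number = token id."""
    with open(path, encoding="utf-8") as f:
        return {line.rstrip("\n"): idx for idx, line in enumerate(f)}

def diff_vocabs(old, new):
    """Return (added, removed) token sets between two vocabularies."""
    added = set(new) - set(old)
    removed = set(old) - set(new)
    return added, removed
```

Diffing the original chinese_L-12_H-768_A-12 vocab.txt against the SimBERT one this way shows exactly which tokens were added or dropped, and whether ids of shared tokens shifted.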
-
  File "C:\Users\Merjadock\PycharmProjects\pythonProject-Flat\Flat-Lattice-Transformer-master\V0\add_lattice.py", line 196, in equip_chinese_ner_with_lexicon
    min_freq=lattice_min_freq, only_train_mi…
-
Please help take a look at the following problem:
paddlehub 2.1.0
paddlenlp 2.0.7
paddlepaddle-gpu 2.1.2.post101
python3.7
https://aistudio.baidu.com/bdvgpu/user/621532/…
-
```
app/src/main/assets
├── frontend
│   ├── final.ort
│   ├── frontend.flags
│   ├── g2p_en
│   │   ├── README.md
│   │   ├── cmudict.dict
│   │   ├── model.fst
│   │   └── phones.sym
│   ├── le…
```
-
Train the model with the following commands; adjust the directory arguments to your own setup:
```
cd /mnt/sda1/transdat/bert-demo/bert/
export BERT_BASE_DIR=/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12
export GLUE_DIR=/mnt/sda1/transdat/bert-demo/bert/d…
```