-
To the folks at Unicom: please upload the model to the ollama model repository, or share the ollama Modelfile.
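For reference, a minimal Modelfile sketch for serving a local GGUF export through ollama; the filename and the Llama-3-style chat template here are assumptions, not details from the upstream release, so adjust both to match the actual model:

```
# Hypothetical local GGUF path -- replace with the real exported file.
FROM ./model.Q4_K_M.gguf

PARAMETER temperature 0.7
PARAMETER stop "<|eot_id|>"

# Assumed Llama-3-style chat template; verify against the model card.
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
```

With this file saved as `Modelfile`, the model can be registered via `ollama create <name> -f Modelfile`.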
-
I have this problem when trying to use a shared vocab. The vocab contains Chinese and English characters. And both my input and target contain Chinese characters. Does anyone know how to solve this p…
-
### The following items must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (`git pull`); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题)…
-
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
- Python version: 3.11.8
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.2
…
-
I have seen some files like "char_vocab_chinese.txt", so is there a Chinese pretrained model?
-
(venv) G:\all_summarization\Text-Summarizer-Pytorch-Chinese>python eval.py --task=test --load_model=0155000.tar
2021-01-04 08:14:08,711 - data_util.log - INFO - log started
data/chunked/test/test_*
2021-…
-
Some Chinese text contains English words, for example: "Apples是苹果的复数形式。". I have questions about how to tokenize such text:
1. Why is Chinese BERT case sensitive, yet I can't find even 'A' in vocab.txt?
…
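The behavior in question can be reproduced without downloading the real model: WordPiece does greedy longest-match over the vocabulary, so a token whose first character has no matching piece collapses to `[UNK]`. Below is a minimal sketch with a toy vocabulary (not the actual bert-base-chinese vocab, which should be inspected directly) that shows why "Apples" can vanish while "apples" tokenizes fine:

```python
# Greedy longest-match WordPiece over a toy vocab, illustrating how an
# uppercase-leading token becomes [UNK] when only lowercase pieces exist.
def wordpiece(token, vocab, unk="[UNK]"):
    pieces, start = [], 0
    while start < len(token):
        end = len(token)
        match = None
        while start < end:
            sub = token[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces carry the ## prefix
            if sub in vocab:
                match = sub
                break
            end -= 1
        if match is None:
            return [unk]  # no piece matches at this position: whole token is UNK
        pieces.append(match)
        start = end
    return pieces

# Toy vocab: lowercase English pieces plus per-character Chinese entries.
toy_vocab = {"apple", "##s", "苹", "果", "是", "的", "复", "数", "形", "式", "。"}

print(wordpiece("Apples", toy_vocab))  # ['[UNK]'] -- no piece starts with 'A'
print(wordpiece("apples", toy_vocab))  # ['apple', '##s']
```

This is why a case-preserving tokenizer paired with a mostly-lowercase vocabulary silently drops capitalized English words; checking whether the pretrained vocab actually contains uppercase pieces is the first diagnostic step.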
-
Which BERT pretrained model did you use here? I used huggingface's bert-base-chinese, but not only are the training results poor, the code also keeps reporting vocab index mismatch errors.
-
bert-base-ner-train \
  -data_dir /home/lef/Desktop/ \
  -output_dir /home/lef/Desktop/output \
  -init_checkpoint /home/lef/Desktop/bert_chinese/bert_model.ckpt \
  -bert_config_file /home/l…
-
Hi there.
I saw that the repo's code only supports English aligner training.