ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

How to introduce more special tokens during vocabulary expansion, before incremental pretraining #864

Closed ZeyuTeng96 closed 8 months ago

ZeyuTeng96 commented 8 months ago

The following items must be checked before submission

Issue type

Model training and fine-tuning

Base model

LLaMA-7B

Operating system

Windows

Detailed description of the problem

# Paste the code you ran here (delete this block if there is none)

When using Llama-2 for vocabulary expansion + incremental pretraining on Chinese data, I would like to introduce more special tokens (e.g. <|User|>, <|Assistant|>, <|System|>) before pretraining starts. How should they be introduced?

Should they be added after the vocabularies are merged, chinese_llama.model is saved, and the tokenizer is reloaded, i.e. after line 50 and before line 52 (https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/scripts/merge_tokenizer/merge_tokenizers.py#L50), via tokenizer.add_special_tokens({'additional_special_tokens': ['<|User|>', '<|Assistant|>', '<|System|>']})?
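
For reference, a minimal sketch of what that first approach might look like, assuming the merged tokenizer was saved to the merged_tokenizer_hf directory used in merge_tokenizers.py (the base-model path below is illustrative):

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Reload the merged tokenizer produced by merge_tokenizers.py
# ('merged_tokenizer_hf' is the HF output directory used in that script).
tokenizer = LlamaTokenizer.from_pretrained('merged_tokenizer_hf')

# Register the role markers as additional special tokens so the
# tokenizer keeps them as single, never-split pieces.
num_added = tokenizer.add_special_tokens(
    {'additional_special_tokens': ['<|User|>', '<|Assistant|>', '<|System|>']}
)
print(f'added {num_added} special tokens, vocab size = {len(tokenizer)}')

# Before incremental pretraining, the embedding matrix must be resized
# to cover the enlarged vocabulary.
model = LlamaForCausalLM.from_pretrained('path/to/llama-7b')  # illustrative path
model.resize_token_embeddings(len(tokenizer))
```

Whichever route is taken, the embeddings have to match the final vocabulary size before training begins.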

Or should they be introduced in some way while training chinese_sp_model (https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/scripts/merge_tokenizer/merge_tokenizers.py#L18)? I tried training chinese_sp_model as follows, but <|User|>, <|Assistant|> and <|System|> did not end up as special tokens: spm.SentencePieceTrainer.train(input='./wiki.txt', model_prefix='wiki', model_type='bpe', vocab_size=20000, byte_fallback=True, user_defined_symbols=['<|User|>', '<|Assistant|>', '<|System|>'])
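
As far as I understand, user_defined_symbols only guarantees the strings survive as single pieces (piece type USER_DEFINED), while control_symbols reserves them as CONTROL pieces that are never produced when encoding raw text, which is closer to special-token semantics. A sketch of the same training call with that one change (wiki.txt and the model_prefix are the placeholders from the question above):

```python
import sentencepiece as spm

# Identical to the training call above, except the role markers are
# passed as control_symbols rather than user_defined_symbols, so they
# are reserved as CONTROL pieces in the resulting vocabulary.
spm.SentencePieceTrainer.train(
    input='./wiki.txt',
    model_prefix='wiki',
    model_type='bpe',
    vocab_size=20000,
    byte_fallback=True,
    control_symbols=['<|User|>', '<|Assistant|>', '<|System|>'],
)
```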

Or is there some other way?
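
A third possibility, sketched here only as an assumption in the spirit of how merge_tokenizers.py already edits the SentencePiece protobuf, would be to append the tokens to the merged model as CONTROL pieces; the file path below is the output path I believe that script uses and may need adjusting:

```python
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model

# Load the merged SentencePiece model (assumed output path of
# merge_tokenizers.py; adjust if your script writes elsewhere).
merged_spm = sp_pb2_model.ModelProto()
with open('merged_tokenizer_sp/chinese_llama.model', 'rb') as f:
    merged_spm.ParseFromString(f.read())

# Append each role marker as a CONTROL piece. CONTROL pieces get
# reserved ids but are never emitted when encoding raw text, which
# matches the usual behaviour of special tokens.
for token in ['<|User|>', '<|Assistant|>', '<|System|>']:
    piece = sp_pb2_model.ModelProto().SentencePiece()
    piece.piece = token
    piece.score = 0.0
    piece.type = sp_pb2_model.ModelProto.SentencePiece.CONTROL
    merged_spm.pieces.append(piece)

# Serialize the modified model back to disk.
with open('merged_tokenizer_sp/chinese_llama.model', 'wb') as f:
    f.write(merged_spm.SerializeToString())
```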

Dependencies (must be provided for code-related issues)

# Paste your dependency information here

Run logs or screenshots

# Paste your run logs here

github-actions[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 8 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.