ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

Question about vocabulary merging with SentencePiece #872

Closed: awmthink closed this issue 9 months ago

awmthink commented 9 months ago

Checklist before submitting

Issue type

Other

Base model

LLaMA-7B

Operating system

Linux

Detailed description of the problem

Most descriptions of vocabulary expansion in the Chinese-language community seem to trace back to this repository, but I don't quite understand the implementation in merge_tokenizers.py.

If we use the BPE subword merging algorithm, then from the algorithm's point of view the core of BPE is the merge rules, i.e. the merges information. In the code, however, the new pieces are simply appended to the end of the pieces list, and all of their scores are set to 0. Does such an implementation conform to the BPE algorithm? I also noticed that in the encode stage SentencePiece does not simply perform greedy longest-subsequence matching.
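For context, the following is a minimal sketch of the append-style merge the question refers to, not a verbatim copy of merge_tokenizers.py; the model file paths are placeholders, and it assumes the standard sentencepiece protobuf API.

```python
# Sketch: append the pieces of a Chinese SentencePiece model to the LLaMA
# tokenizer's model proto, giving every new piece a score of 0 (the behavior
# the question is asking about). Paths are placeholders.
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model

llama_sp = spm.SentencePieceProcessor(model_file="llama/tokenizer.model")   # placeholder
chinese_sp = spm.SentencePieceProcessor(model_file="chinese_sp.model")      # placeholder

llama_proto = sp_pb2_model.ModelProto()
llama_proto.ParseFromString(llama_sp.serialized_model_proto())
chinese_proto = sp_pb2_model.ModelProto()
chinese_proto.ParseFromString(chinese_sp.serialized_model_proto())

existing = {p.piece for p in llama_proto.pieces}
for p in chinese_proto.pieces:
    if p.piece not in existing:
        new_piece = sp_pb2_model.ModelProto().SentencePiece()
        new_piece.piece = p.piece
        new_piece.score = 0.0  # appended pieces carry no learned merge/score information
        llama_proto.pieces.append(new_piece)

with open("merged_tokenizer.model", "wb") as f:
    f.write(llama_proto.SerializeToString())
```

Because the appended pieces carry no merge rules and all share the same score, whether they interact correctly with the original model's merge priorities at encode time is exactly the open question raised here.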

If the author has any further reference material on this, I would appreciate some guidance. Thanks!

Dependencies (must be provided for code-related issues)

No response

Runtime logs or screenshots

No response

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.