karpathy / minbpe

Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.

LLMs are worse at non-English languages #92

Open 7CD opened 1 week ago

7CD commented 1 week ago

In his YouTube video, Andrej noted that LLMs are worse at non-English languages, partly due to tokenization. Basically, for less represented languages, even frequent character pairs appear less often in the training corpus than most English pairs do. Hence, fewer BPE merges are spent on these languages, and their token representations end up lengthy. Wouldn't it be a good idea to build separate token vocabularies for each language, and for distinct domains, e.g., Python code?
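
You can see the effect concretely with a minimal sketch using the `BasicTokenizer` from this repo's README (the corpus mix, the Russian sample sentence, and the vocab size are made up for demonstration):

```python
from minbpe import BasicTokenizer

# Toy corpus dominated by English, with a small Russian admixture.
# Real training data would be far larger; the imbalance is the point.
corpus = ("the quick brown fox jumps over the lazy dog. " * 200
          + "съешь ещё этих мягких французских булок. " * 5)

tokenizer = BasicTokenizer()
tokenizer.train(corpus, vocab_size=256 + 64)  # 64 merges on top of the 256 raw bytes

for s in ("the quick brown fox", "съешь ещё этих мягких"):
    ids = tokenizer.encode(s)
    # higher tokens/char = lengthier, less compressed representation
    print(f"{s!r}: {len(ids)} tokens, {len(ids) / len(s):.2f} tokens/char")
```

On a corpus like this, the English string should compress to well under one token per character, while the Cyrillic string stays near two tokens per character: each Cyrillic letter is already two UTF-8 bytes, and its byte pairs are too rare in the corpus to win many of the 64 merges.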