karpathy / minbpe

Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.

Update regex.py to correctly parse scripts with combining marks #71

Open ajaykg opened 4 months ago

ajaykg commented 4 months ago

Fixes the problem that all tokenizers have with combining marks: diacritics, Indic matras (vowel signs that follow consonants), the Indic halant, and the combining marks used in Arabic, Hebrew, etc. This was probably breaking most languages other than English and CJK. Verified for Indic languages.

ajaykg commented 4 months ago
>>> import regex as re
>>> gpt2pat = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
>>> text = "हहिन्दी विकिपीडिया"
>>> print(re.findall(gpt2pat, text))
['हह', 'िन', '्द', 'ी', ' व', 'िक', 'िप', 'ीड', 'िय', 'ा']
>>> # The above got broken at every vowel combining mark
>>> # It can be fixed by including \p{M} wherever there is \p{L}
>>> gpt2pat = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+[\p{L}\p{M}]+|\p{N}{1,3}| ?[^\s\p{L}\p{M}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
>>> print(re.findall(gpt2pat, text))
['हहिन्दी', ' विकिपीडिया']
>>> # The above keeps each word intact and correctly breaks only at word boundaries
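
For context, a minimal sketch (an illustration added here, not part of the original comment) using Python's standard unicodedata module: the matras and the halant (virama) in a Devanagari word carry Unicode combining-mark categories (Mc and Mn), which is exactly the class \p{M} matches:

>>> import unicodedata
>>> # each vowel sign (Mc) and the virama/halant (Mn) is a combining mark,
>>> # so \p{L} alone never matches them and the old pattern splits there
>>> [(c, unicodedata.category(c)) for c in "हिन्दी"]
[('ह', 'Lo'), ('ि', 'Mc'), ('न', 'Lo'), ('्', 'Mn'), ('द', 'Lo'), ('ी', 'Mc')]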
ajaykg commented 4 months ago

https://github.com/karpathy/minbpe/issues/73

ajaykg commented 4 months ago

bump.

dustinwloring1988 commented 3 months ago

Does this merge negatively affect anything?

ajaykg commented 3 months ago

It should not. We are only telling the regular expression not to split a word between a character and a combining mark that follows it, and combining marks in all scripts should by definition never occur independently. At least for South Asian languages, every vowel sign that follows a consonant is a combining mark, so this should significantly improve tokenization; without the fix, the tokenizer splits at every such mark and acts almost like a character-level model.
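
As a quick sanity check (a sketch I added, not part of the PR itself): on text containing no combining marks the old and new patterns produce identical splits, so English tokenization should be unaffected:

>>> import regex as re
>>> old = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
>>> new = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+[\p{L}\p{M}]+|\p{N}{1,3}| ?[^\s\p{L}\p{M}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
>>> s = "Hello world, it's 2024!"
>>> re.findall(old, s) == re.findall(new, s)  # identical when no \p{M} characters appear
True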