Open ajaykg opened 6 months ago
>>> import regex as re
>>> gpt2pat = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""" )
>>> str = r"""हहिन्दी विकिपीडिया"""
>>> print (re.findall(gpt2pat, str ))
['हह', 'िन', '्द', 'ी', ' व', 'िक', 'िप', 'ीड', 'िय', 'ा']
>>> # The output above is broken at every combining vowel mark
>>> # It can be fixed by including \p{M} wherever there is \p{L}
>>> gpt2pat = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+[\p{L}\p{M}]+|\p{N}{1,3}| ?[^\s\p{L}\p{M}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""" )
>>> print (re.findall(gpt2pat, str ))
['हहिन्दी', ' विकिपीडिया']
>>> # The above keeps the words intact and correctly breaks at word boundaries
bump.
Does this merge negatively affect anything?
It should not, since we are only telling the regular expression not to split a word between a character and the combining mark that follows it. Combining marks in all scripts should, by definition, not exist independently. At least for South Asian languages, all the vowels following a consonant are combining marks, so this should significantly improve tokenization, which currently acts almost like a character-level model for these scripts.
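For reference, here is a quick check (not part of the original session) that the Devanagari matras and the halant really are combining marks in the Unicode sense, i.e. they fall under \p{M} rather than \p{L}:

import unicodedata

# क is a consonant (category Lo → \p{L}); the vowel sign ि (matra) and the
# virama ् (halant) are nonspacing combining marks (category Mn → \p{M}),
# which is why the original pattern could not keep them attached to the
# preceding consonant.
for ch in "कि्":
    print(f"U+{ord(ch):04X}", unicodedata.category(ch))  # Lo, Mn, Mn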
This fixes the problem that all such tokenizers have with combining marks: diacritics, Indic matras (vowels after consonants), the Indic halant, Arabic and Hebrew vowel points, etc. The original pattern was probably breaking most languages except English and the CJK scripts. Verified for Indic languages.
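The same comparison can be repeated for other scripts with combining marks. A minimal sketch (the sample words are illustrative, not from the original report) that runs both patterns over vocalized Hebrew and Arabic text:

import regex as re

# Original GPT-2-style pattern and the proposed fix (adds \p{M} next to \p{L}).
original = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
fixed = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+[\p{L}\p{M}]+|\p{N}{1,3}| ?[^\s\p{L}\p{M}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")

samples = [
    "שָׁלוֹם",    # Hebrew "shalom" with niqqud (combining vowel points)
    "مَدْرَسَة",   # Arabic "madrasa" with harakat (combining short vowels)
]

for text in samples:
    # The original pattern splits at every combining mark; the fixed one
    # should keep each fully vocalized word as a single piece.
    print(re.findall(original, text))
    print(re.findall(fixed, text))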