The special character '§', used to separate tokens when custom split characters are set, was not passed to the CharacterTokenizer. As a result, inserting a token before an existing one caused the tokenizer to misdetect the end of the following token.