Our regex already has a special case that leaves Chinese and Japanese
alone when an appropriate tokenizer for the language isn't in use,
because Unicode's default word segmentation would turn every character
into its own token. Thai has the same problem, and we don't have an
appropriate tokenizer for Thai at all, so I've added a similar fallback
for it.
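For illustration, here is a minimal sketch of the kind of fallback
described, written with Python's third-party `regex` module (which
supports `\p{Script=...}` classes and Unicode default word boundaries
via the WORD flag). The pattern and the `TOKEN_RE`/`tokenize` names are
my assumptions, not the project's actual code:

```python
import regex  # third-party: pip install regex

# Sketch of the fallback: spans of scripts that Unicode's default
# word-break rules can't segment meaningfully are kept whole; other
# text is split at default Unicode word boundaries.
TOKEN_RE = regex.compile(
    r"""
    [\p{Script=Han}\p{Script=Hiragana}\p{Script=Katakana}]+  # CJK fallback: keep the span whole
    | [\p{Script=Thai}]+                                     # Thai fallback: same idea
    | \w(?:\B\w)*   # everything else: stop at default word boundaries
    """,
    regex.VERBOSE | regex.WORD | regex.VERSION1,
)

def tokenize(text):
    """Return the tokens TOKEN_RE finds, in order."""
    return TOKEN_RE.findall(text)

print(tokenize("สวัสดีครับ hello"))  # -> ['สวัสดีครับ', 'hello']
```

Without the `\p{Script=Thai}` branch, the `\w(?:\B\w)*` branch would
take over, and the default word-break rules place a boundary between
nearly every pair of Thai characters, yielding roughly one token per
character.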