Modify `JiebaTokenizer.tokenize` to correctly process mixed Chinese and English text. This change resolves an issue where extra whitespace tokens were generated between Chinese and English tokens, which previously led to dimension mismatch errors during model training.
Detailed explanation:
Updated the tokenize method to skip whitespace tokens and to locate each token's position in the original text rather than relying solely on Jieba's output. This prevents the dimension mismatch error and addresses the issues reported in Jira as OSS-275 and OSS-539.
Benefits:
Prevents training errors when training data contains mixed Chinese and English languages.
Improves the robustness of tokenization in mixed language scenarios.
Proposed changes:
Update `JiebaTokenizer.tokenize` to skip whitespace tokens and to compute token offsets from the original text.
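The adjusted logic can be sketched as follows. This is a minimal illustration, not the actual Rasa implementation: the `Token` class and `tokenize_words` helper are simplified stand-ins, and the word list is hand-written to mimic what `jieba.lcut` would return for the sample text, so the snippet runs without jieba installed.

```python
# Minimal sketch of the adjusted tokenization logic (not the actual Rasa code).
# Token is simplified; the real Rasa class carries more attributes.

class Token:
    def __init__(self, text, start):
        self.text = text
        self.start = start
        self.end = start + len(text)

def tokenize_words(text, words):
    """Turn Jieba's word list into position-aware tokens.

    Whitespace tokens are skipped, and each token's offset is found in
    the original text rather than taken from Jieba's own offsets.
    """
    tokens = []
    search_from = 0
    for word in words:
        if word.isspace():
            # Skip the extra whitespace tokens emitted between
            # Chinese and English segments.
            continue
        start = text.index(word, search_from)  # locate token in the text itself
        tokens.append(Token(word, start))
        search_from = start + len(word)
    return tokens

# Hand-written word list mimicking jieba.lcut("hello 你好") output.
text = "hello 你好"
tokens = tokenize_words(text, ["hello", " ", "你好"])
print([(t.text, t.start, t.end) for t in tokens])
# [('hello', 0, 5), ('你好', 6, 8)]
```

Because the whitespace token is dropped before offsets are assigned, the token count now matches the feature dimensions computed downstream, which is what avoids the training-time mismatch.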
Related Jira Issues: OSS-275, OSS-539
Status (please check what you already did):
- [ ] made sure code is formatted with black (please check Readme for instructions)

Additional notes: