pluiez opened 1 year ago
Technically, you can just concatenate the two BPE files (called codes_file in the README), and this should achieve your desired result. I did this back in 2015 to combine Cyrillic and Latin merge operations for Russian. A few things to pay attention to:
- the first line of the file gives some version info. You can remove this from the 2nd file that you concatenate to the first.
- the order of the files matters, since you will get different segmentations depending on the order of merge operations.
- if there's Latin-alphabet text in the Japanese file, there is a chance that the English tokenization changes in rare cases. To prevent this, you'd have to use only the first 32000 merge operations for English text.
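The first two points above can be sketched in a few lines of Python. This is just an illustration, not part of subword-nmt itself; the file names `en.codes` and `ja.codes` and the function name `merge_codes` are hypothetical:

```python
def merge_codes(first_path, second_path, out_path):
    """Concatenate two BPE codes files into one.

    Keeps the version header line (e.g. "#version: 0.2") only from the
    first file and drops it from the second, as described above. The
    order of the arguments matters: merges from first_path are applied
    before merges from second_path.
    """
    with open(out_path, "w", encoding="utf-8") as out:
        with open(first_path, encoding="utf-8") as f:
            out.writelines(f)  # version line + all merges of the first file
        with open(second_path, encoding="utf-8") as f:
            next(f)  # skip the version header of the second file
            out.writelines(f)

# hypothetical usage:
# merge_codes("en.codes", "ja.codes", "en_ja.codes")
```

The merged file can then be passed to apply_bpe as the codes file in the usual way.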
Thank you very much!
Hi, here is my situation.
Since the tokenizer cannot handle Japanese text, I'm wondering whether it's possible to extend the original BPE tokenizer, trained on an English corpus, so that it can tokenize Japanese as well. My idea is to train a separate BPE model on a Japanese corpus and combine it with the original one.
I'm not sure whether the two BPE models can be merged into a new model while keeping the tokenization of English unchanged. Any help would be appreciated!