huggingface / tokenizers

💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
https://huggingface.co/docs/tokenizers
Apache License 2.0

Allow users to select/write encoding strategies #1655

Open pietrolesci opened 1 month ago

pietrolesci commented 1 month ago

Hi there,

Do you plan to add the possibility to control how tokenizers behave at inference time?

For example, adding the possibility for the user to decide whether to use standard BPE (merges) or, e.g., the longest prefix encoding strategy. See Greed is All You Need: An Evaluation of Tokenizer Inference Methods for why this can be useful.

Thanks in advance for your time!

Best, Pietro


Example. Consider a BPE tokenizer with merges M = {yu, yum, my} and initial alphabet A = {y, u, m}. Given the string s = yummy, the standard BPE merge-based strategy tokenizes s as yu | m | my while BPE with the longest prefix encoding strategy tokenizes s as yum | my.
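The two strategies can be illustrated with a small self-contained sketch. This is not the `tokenizers` implementation, just a toy version of both algorithms; it assumes the merge rules learned during training are (y, u) and (m, y), while the vocabulary additionally contains yum (a token that merge-based inference cannot reach on this string, which is exactly why the two strategies diverge):

```python
def bpe_encode(text, merges):
    """Standard merge-based BPE inference: repeatedly apply the
    highest-priority (lowest-rank) merge present in the sequence."""
    tokens = list(text)
    ranks = {pair: i for i, pair in enumerate(merges)}
    while True:
        pairs = [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]
        ranked = [(ranks[p], i) for i, p in enumerate(pairs) if p in ranks]
        if not ranked:
            return tokens
        _, i = min(ranked)  # leftmost occurrence of the best-ranked merge
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]


def longest_prefix_encode(text, vocab):
    """Greedy longest-prefix inference (WordPiece-style): at each position,
    take the longest vocabulary token that matches."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no vocabulary token matches {text[i:]!r}")
    return tokens


merges = [("y", "u"), ("m", "y")]                # assumed merge order
vocab = {"y", "u", "m", "yu", "my", "yum"}       # yum is in the vocab
print(bpe_encode("yummy", merges))               # ['yu', 'm', 'my']
print(longest_prefix_encode("yummy", vocab))     # ['yum', 'my']
```

Same vocabulary, two different segmentations of the same string, which is the behavior the issue asks to expose as a user-selectable option.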

ArthurZucker commented 1 month ago

Hey! If it is demanded by the community, for sure! 🤗 I think it would still be quite hard to make it super efficient (changing it would take some time).

pietrolesci commented 1 month ago

Hi @ArthurZucker, thanks a lot for your swift reply! I think it would be super useful, especially for research purposes. Perhaps the simplest option would be to allow BPE tokenizers to behave like WordPiece at inference time. Just as users can assign, e.g., a pre_tokenizer to a tokenizer class, they could in principle pass an encoding strategy (e.g., a predictor) too. What do you think?
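To make the proposal concrete, here is a hypothetical sketch of what such an API surface could look like. None of these names exist in the `tokenizers` library; the point is only the pattern: the inference strategy becomes a swappable component assigned to the tokenizer, the way a pre_tokenizer is today:

```python
from typing import Callable, List, Optional

# ToyTokenizer and EncodingStrategy are illustrative names, not tokenizers API.
EncodingStrategy = Callable[[str], List[str]]


class ToyTokenizer:
    def __init__(self, vocab: set, default_strategy: EncodingStrategy):
        self.vocab = vocab
        # The strategy is a plain attribute, so users can swap it at
        # inference time without retraining or rebuilding the tokenizer.
        self.encoding_strategy: EncodingStrategy = default_strategy

    def encode(self, text: str) -> List[str]:
        return self.encoding_strategy(text)


def longest_prefix(vocab) -> EncodingStrategy:
    """Build a greedy longest-prefix strategy over a fixed vocabulary."""
    def encode(text: str) -> List[str]:
        tokens, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):
                if text[i:j] in vocab:
                    tokens.append(text[i:j])
                    i = j
                    break
            else:
                raise ValueError(f"no vocabulary token matches {text[i:]!r}")
        return tokens
    return encode


vocab = {"y", "u", "m", "yu", "my", "yum"}
tok = ToyTokenizer(vocab, default_strategy=longest_prefix(vocab))
print(tok.encode("yummy"))  # ['yum', 'my']
# Swapping tok.encoding_strategy to a merge-based function would change
# the segmentation without touching the vocabulary.
```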