huggingface / tokenizers

💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
https://huggingface.co/docs/tokenizers
Apache License 2.0

Loading `tokenizer.model` with Rust API #1518

Closed EricLBuehler closed 2 months ago

EricLBuehler commented 5 months ago

Hello all,

Thank you for your excellent work here. I am trying to load a tokenizer.model file in my Rust application. However, it seems that the Tokenizer::from_file function only supports loading from a tokenizer.json file. This is a problem: requiring users to run a small script to save a tokenizer.json first is error-prone and hard to discover. Is there a way to load a tokenizer.model file directly?

ArthurZucker commented 5 months ago

You cannot load a tokenizer.model directly; you need to write a converter. This is because the file does not come from the tokenizers library but from either tiktoken or sentencepiece, and there is no single recipe: the converter has to adapt to the content of the file, which is not entirely straightforward.

https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py#L544 is the simplest way to understand the process!

EricLBuehler commented 5 months ago

Ok, I understand. Do you know of a way or a library to do this in Rust without reaching for the Python transformers converter?

ArthurZucker commented 5 months ago

A library, no, but we should be able to come up with a small piece of Rust code to do this 😉

EricLBuehler commented 4 months ago

@ArthurZucker are there any specifications or example loaders which I can look at to implement this?

chenwanqq commented 4 months ago

I have the same question, for LLaVA reasons 😉

ArthurZucker commented 3 months ago

Yes! Actually, the best way to do this is to use the converters from transformers; see here: https://github.com/huggingface/transformers/blob/2965b204593df9d5652313386ec280ffbfd1753b/src/transformers/convert_slow_tokenizer.py#L1340.

In Rust, we would need to read and parse the .model file with a sentencepiece loader.
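To illustrate what such a loader involves: the sentencepiece .model file is a protobuf, and the wire format is simple enough to walk by hand. Below is a minimal, stdlib-only sketch that extracts (piece, score) pairs from a ModelProto blob, assuming the usual sentencepiece_model.proto layout (field 1 of ModelProto is a repeated SentencePiece message whose field 1 is the piece string and field 2 the float32 score). This is an illustration, not a production loader; real code should use a generated protobuf parser and proper error handling instead of indexing blindly.

```rust
/// Decode a protobuf varint starting at offset `i`; returns (value, next offset).
fn read_varint(buf: &[u8], mut i: usize) -> (u64, usize) {
    let (mut value, mut shift) = (0u64, 0u32);
    loop {
        let b = buf[i];
        i += 1;
        value |= ((b & 0x7f) as u64) << shift;
        if b & 0x80 == 0 {
            return (value, i);
        }
        shift += 7;
    }
}

/// Extract (piece, score) pairs from a ModelProto blob; unknown fields are skipped.
fn parse_pieces(buf: &[u8]) -> Vec<(String, f32)> {
    let mut pieces = Vec::new();
    let mut i = 0;
    while i < buf.len() {
        let (tag, next) = read_varint(buf, i);
        i = next;
        match (tag >> 3, tag & 7) {
            (1, 2) => {
                // Length-delimited SentencePiece submessage.
                let (len, next) = read_varint(buf, i);
                i = next;
                let sub = &buf[i..i + len as usize];
                i += len as usize;
                let (mut piece, mut score) = (None, 0.0f32);
                let mut j = 0;
                while j < sub.len() {
                    let (t, next) = read_varint(sub, j);
                    j = next;
                    match (t >> 3, t & 7) {
                        (1, 2) => {
                            // piece: string
                            let (n, next) = read_varint(sub, j);
                            j = next;
                            piece = Some(String::from_utf8_lossy(&sub[j..j + n as usize]).into_owned());
                            j += n as usize;
                        }
                        (2, 5) => {
                            // score: float32, little-endian
                            score = f32::from_le_bytes([sub[j], sub[j + 1], sub[j + 2], sub[j + 3]]);
                            j += 4;
                        }
                        // Skip unknown fields by wire type: varint, 64-bit, length-delimited, 32-bit.
                        (_, 0) => { let (_, next) = read_varint(sub, j); j = next; }
                        (_, 1) => j += 8,
                        (_, 2) => { let (n, next) = read_varint(sub, j); j = next + n as usize; }
                        (_, 5) => j += 4,
                        _ => return pieces,
                    }
                }
                if let Some(p) = piece {
                    pieces.push((p, score));
                }
            }
            // Skip other top-level ModelProto fields the same way.
            (_, 0) => { let (_, next) = read_varint(buf, i); i = next; }
            (_, 1) => i += 8,
            (_, 2) => { let (n, next) = read_varint(buf, i); i = next + n as usize; }
            (_, 5) => i += 4,
            _ => return pieces,
        }
    }
    pieces
}
```

From the extracted (piece, score) list one can then build a Unigram model, mirroring what convert_slow_tokenizer.py does on the Python side.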

EricLBuehler commented 3 months ago

Ok. Could I use this crate?

One other question: I am implementing GGUF to HF tokenizers conversion in mistral.rs and have had success with the unigram model. I am now adding the gpt2 (BPE) model, but I was wondering which components of the Tokenizer are required, such as the normalizer, post-processor, etc., and also which decoder to use?

This is what I currently do: https://github.com/EricLBuehler/mistral.rs/blob/d66e5aff1e7faf208469c5bef3c70d45ffda5401/mistralrs-core/src/pipeline/gguf_tokenizer.rs#L116-L142. I would appreciate it if you could take a quick look and see if there is anything obviously wrong!

vody-am commented 3 months ago

Oh, I also have an interest in reading sentencepiece tokenizers, in order to invoke the SigLIP text transformer in Rust!

EDIT: using the library mentioned by Eric above, I was able to load up https://huggingface.co/google/siglip-so400m-patch14-384/blob/main/spiece.model and it seemingly tokenized my input!

ArthurZucker commented 3 months ago

@EricLBuehler we actually shipped this in transformers, but sure, I can have a look. Most of the tokenizers that are supported in GGUF format should use a Metaspace pre-tokenizer and decoder, a BPE or unigram model, and either no normalizer or a precompiled charsmap. All the requirements are in [convert_slow_tokenizer](https://github.com/huggingface/transformers/blob/8685b3c5d2dd2550527773d2a02499495a759e31/src/transformers/convert_slow_tokenizer.py#L56) in transformers.
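As a concrete illustration of that component wiring, here is a stdlib-only sketch that assembles a minimal tokenizer.json skeleton for a sentencepiece-style unigram tokenizer: a Metaspace pre-tokenizer and decoder, a Unigram model, and no normalizer. The field names approximate the tokenizer.json schema (inspect a file produced by the tokenizers library for the authoritative layout), the vocab is a stub, and a real converter would build this with serde rather than string formatting.

```rust
/// Build a minimal tokenizer.json skeleton for a unigram model converted
/// from sentencepiece. "\u2581" is the JSON escape for U+2581 (▁), the
/// sentencepiece space marker used by Metaspace. Field names are approximate.
fn unigram_tokenizer_json(vocab: &[(String, f32)]) -> String {
    // Vocab entries are [piece, score] pairs, as in the Unigram model schema.
    let entries: Vec<String> = vocab
        .iter()
        .map(|(piece, score)| format!("[{:?}, {}]", piece, score))
        .collect();
    format!(
        concat!(
            "{{\n",
            "  \"normalizer\": null,\n",
            "  \"pre_tokenizer\": {{\"type\": \"Metaspace\", \"replacement\": \"\\u2581\"}},\n",
            "  \"decoder\": {{\"type\": \"Metaspace\", \"replacement\": \"\\u2581\"}},\n",
            "  \"model\": {{\"type\": \"Unigram\", \"unk_id\": 0, \"vocab\": [{}]}}\n",
            "}}"
        ),
        entries.join(", ")
    )
}
```

For a gpt2-style BPE tokenizer, the analogous skeleton would swap in a ByteLevel pre-tokenizer and decoder and a BPE model with vocab and merges; which components are needed depends on the source tokenizer, as noted above.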

I'll think about potentially converting sentencepiece .model files automatically in Rust, but the big problem is that I don't want to have to support both sentencepiece and tiktoken, so it might just be example gists / snippets showing how to do this!

EricLBuehler commented 3 months ago

Thank you, @ArthurZucker for the link! I was actually able to get the GPT2 conversion to work now!

github-actions[bot] commented 2 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.