simonw / ttok

Count and truncate text based on tokens
Apache License 2.0

Ability to count tokens for models other than OpenAI #8

Open simonw opened 10 months ago

simonw commented 10 months ago

Had a great tip on Discord about the tokenizers library, whose docs say: https://huggingface.co/docs/tokenizers/python/latest/quicktour.html#using-a-pretrained-tokenizer

You can load any tokenizer from the Hugging Face Hub as long as a tokenizer.json file is available in the repository.

And sure enough, this seems to work:

>>> import tokenizers
>>> from tokenizers import Tokenizer
>>> tokenizer = Tokenizer.from_pretrained("TheBloke/Llama-2-70B-fp16")
Downloaded 1.76MiB in 0s
>>> tokenizer.encode("hello world")
Encoding(num_tokens=3, attributes=[ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, overflowing])
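Counting then reduces to the length of the encoding's `ids` list. A minimal sketch of a helper that works with any tokenizers-style object whose `encode()` returns an Encoding (the helper name `count_tokens` is illustrative, not part of ttok):

```python
def count_tokens(text: str, tokenizer) -> int:
    """Count tokens with any Hugging Face tokenizers-style object:
    encode() must return an Encoding exposing a list of token ids."""
    return len(tokenizer.encode(text).ids)

# Usage with a Hub tokenizer (downloads tokenizer.json on first call):
#   from tokenizers import Tokenizer
#   tok = Tokenizer.from_pretrained("TheBloke/Llama-2-70B-fp16")
#   count_tokens("hello world", tok)
```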
simonw commented 10 months ago

Anthropic have a tokenizer too: https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/_tokenizers.py

marcothedeveloper123 commented 4 months ago

What if you don't know the origin of the model, and all you have to go by is its name?

Is there baked-in metadata we can read that tells us which tokenizer to use?

NightMachinery commented 1 week ago

So what exactly can we use for Claude models? E.g., Sonnet 3.5.