dmitry-brazhenko / SharpToken

SharpToken is a C# library for tokenizing natural language text. It's based on the tiktoken Python library and designed to be fast and accurate.
https://www.nuget.org/packages/SharpToken
MIT License

Incorrect token count with Cyrillic #7

Closed · vermorel closed this 1 year ago

vermorel commented 1 year ago

The online OpenAI tokenizer https://platform.openai.com/tokenizer counts 549 tokens for the piece of text below:

В цепочках поставок кейс-стадии, когда называются одна или несколько сторон, страдают от серьезных конфликтов интересов. Компании и их поддерживающие поставщики (программное обеспечение, консалтинг) имеют заинтересованность в представлении результата в положительном свете. Кроме того, фактические цепочки поставок обычно получают пользу или пострадают от случайных условий, которые никак не связаны с качеством их исполнения. Персонажи цепочки поставок - это методологический ответ на эти проблемы.

However, SharpToken counts 219 tokens. There is something wrong going on.

dmitry-brazhenko commented 1 year ago

Hello @vermorel !

Thanks for reaching out!!

I checked with the original tiktoken library; here is its result (screenshot 2023-06-25_12-17-15):

As far as I can see, for the encodings r50k_base, p50k_base, and p50k_edit the number of tokens is 549, while for the encoding cl100k_base the number of tokens is 219.

I double-checked in the original tiktoken source code: "gpt-35-turbo" is mapped to the "cl100k_base" encoding. Here is the proof: https://github.com/openai/tiktoken/blob/5d970c1100d3210b42497203d6b5c1e30cfda6cb/tiktoken/model.py#L10

So I would say that the website you shared uses a "..50k.." encoding for GPT-3.

So I would say that SharpToken works correctly :)
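The lookup behind this can be illustrated with a small sketch that mirrors the `MODEL_TO_ENCODING` table in tiktoken's model.py. Only a few entries relevant to this thread are shown; it is a simplified illustration, not the full mapping.

```python
# Simplified sketch mirroring tiktoken's MODEL_TO_ENCODING table (model.py).
# Only a handful of entries are reproduced here for illustration.
MODEL_TO_ENCODING = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",   # named gpt-35-turbo on Azure
    "text-davinci-003": "p50k_base",  # legacy completion model
    "davinci": "r50k_base",
}

def encoding_for_model(model: str) -> str:
    """Return the encoding name for a model, mirroring tiktoken's lookup."""
    try:
        return MODEL_TO_ENCODING[model]
    except KeyError:
        raise KeyError(f"no encoding registered for model {model!r}")

print(encoding_for_model("gpt-3.5-turbo"))  # cl100k_base
```

This is why the same text yields 219 tokens under gpt-3.5-turbo (cl100k_base) but 549 under the older "..50k.." encodings.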

Btw, I added your example to the testplan: https://github.com/dmitry-brazhenko/SharpToken/blob/main/SharpToken.Tests/data/TestPlans.txt

Please let me know if I am wrong :)

vermorel commented 1 year ago

Thank you very much!

Looking at the TikToken code, I finally understand what is going on. The encoding depends on the mode, not just on the model. I am using the text mode (aka completion), and the encoding is p50k_base. If I were to use the chat mode instead, the encoding would be cl100k_base.

Azure OpenAI offers gpt-35-turbo in text mode (while OpenAI seemingly does not, offering only chat). This case is not listed in the tiktoken code, but it stands to reason that it must also be p50k_base.
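The takeaway above can be sketched as a tiny helper: the encoding follows from both the model and the API mode. Note that the p50k_base choice for gpt-35-turbo in completion (text) mode is the inference drawn in this thread for the Azure case; tiktoken's own mapping only records cl100k_base for this model.

```python
def encoding_for(model: str, mode: str) -> str:
    """Illustrative only: pick an encoding from (model, mode).

    tiktoken maps gpt-3.5-turbo to cl100k_base. The p50k_base value for
    the completion (text) mode on Azure is an inference from this thread,
    not something recorded in tiktoken's mapping.
    """
    if model in ("gpt-3.5-turbo", "gpt-35-turbo"):
        if mode == "chat":
            return "cl100k_base"
        if mode == "completion":
            return "p50k_base"  # inferred for Azure's text-mode deployment
        raise ValueError(f"unknown mode {mode!r}")
    raise ValueError(f"unknown model {model!r}")
```

Under this reading, both the online tokenizer (549 tokens, a "..50k.." encoding) and SharpToken (219 tokens, cl100k_base) are correct for their respective modes.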