Hello @vermorel!
Thanks for reaching out!!
I checked with the original tiktoken library; here are its results:
As far as I can see, for the encodings r50k_base, p50k_base, and p50k_edit the number of tokens is 549, while for cl100k_base it is 219.
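For reference, a minimal sketch of how that check can be reproduced with the Python tiktoken package (`sample_text` is a placeholder for the text attached to the issue, which is not reproduced here):

```python
import tiktoken

sample_text = "..."  # placeholder for the text from the original issue

# Count tokens under each of the encodings mentioned above.
for name in ("r50k_base", "p50k_base", "p50k_edit", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    print(name, len(enc.encode(sample_text)))
# With the issue's text, this should print 549 for the three *50k* encodings
# and 219 for cl100k_base.
```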
I double-checked the original tiktoken source code: "gpt-35-turbo" is mapped to the "cl100k_base" encoding. Here is the proof: https://github.com/openai/tiktoken/blob/5d970c1100d3210b42497203d6b5c1e30cfda6cb/tiktoken/model.py#L10
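The same mapping can also be queried programmatically (a sketch; note the dotted spelling "gpt-3.5-turbo" is the form guaranteed to be in tiktoken's model table, while the Azure spelling may not resolve in every version):

```python
import tiktoken

# Resolve the encoding tiktoken associates with the model name.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(enc.name)  # cl100k_base
```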
So I would say that the website you shared uses one of the "..50k.." encodings for "gpt3", and that SharpToken works correctly :)
Btw, I added your example to the test plan: https://github.com/dmitry-brazhenko/SharpToken/blob/main/SharpToken.Tests/data/TestPlans.txt
Please let me know if I am wrong :)
Thank you very much!
Looking at the tiktoken code, I finally understand what is going on. The encoding depends on the mode, not just on the model. I am using the `text` mode (aka completion), and the encoding is `p50k_base`. If I were to use the `chat` mode instead, the encoding would be `cl100k_base`.
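A quick way to see this with tiktoken itself (a sketch; `text-davinci-003` is used here as a stand-in for a completion-mode model, which is my assumption of what the `text` mode corresponds to):

```python
import tiktoken

# Completion-era model -> p50k_base
print(tiktoken.encoding_for_model("text-davinci-003").name)  # p50k_base
# Chat model -> cl100k_base
print(tiktoken.encoding_for_model("gpt-3.5-turbo").name)     # cl100k_base
```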
Azure OpenAI offers `gpt-35-turbo` in `text` mode (while OpenAI seemingly does not, offering only `chat`). This case is not listed in the tiktoken code, but it stands to reason that it must also be `p50k_base`.
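Since that Azure deployment is not in tiktoken's model table, the practical workaround would be to select the encoding explicitly rather than resolving it from a model name (a sketch; that `p50k_base` is the right choice here is my inference above, not something tiktoken's tables confirm):

```python
import tiktoken

# Pick the encoding directly instead of resolving it from a model name.
enc = tiktoken.get_encoding("p50k_base")
print(len(enc.encode("Hello world")))  # token count under p50k_base
```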
The online OpenAI tokenizer https://platform.openai.com/tokenizer counts 549 tokens for the piece of text below:
However, SharpToken counts 219 tokens. There is something wrong going on.