First of all, I wanted to thank you for creating this project, it's been a huge help.
I noticed that the `promptTokensEstimate()` function doesn't take into account the `function_call` param to `openai.chat.completions.create()`. This means that the token count is:

- correct if `function_call` is `'auto'` or `undefined`
- under by 1 when `function_call` is `'none'`
- under by `4 + stringTokens(function_call.name)` when `function_call` is an object
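
For anyone who wants to double-check those offsets, here's a rough sketch of how they can be verified against the `usage.prompt_tokens` the API reports back. The `get_weather` function, the model choice, and the client setup are all just illustrative placeholders:

```ts
import { OpenAI } from 'openai'
import { promptTokensEstimate } from 'openai-chat-tokens'

const openai = new OpenAI() // assumes OPENAI_API_KEY is set in the environment

const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: 'user', content: 'What is the weather in Boston?' },
]
const functions = [
  {
    name: 'get_weather', // placeholder function for illustration
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
    },
  },
]

const estimate = promptTokensEstimate({ messages, functions })
const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages,
  functions,
  function_call: { name: 'get_weather' },
})

// With function_call set to a specific function, this difference comes out
// to 4 + stringTokens('get_weather') rather than 0
console.log((response.usage?.prompt_tokens ?? 0) - estimate)
```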
It would be nice if `openai-chat-tokens` supported this natively. In the meantime, here's a wrapper function I'm using:
```ts
import type { OpenAI } from 'openai'
import { promptTokensEstimate, stringTokens } from 'openai-chat-tokens'

export function calculateTokenCountForChat({
  messages,
  functions,
  function_call,
}:
  | OpenAI.Chat.ChatCompletionCreateParamsNonStreaming
  | OpenAI.Chat.ChatCompletionCreateParamsStreaming): number {
  const promptTokens = promptTokensEstimate({ messages, functions })
  if (function_call && function_call !== 'auto') {
    // 'none' costs a single extra token; naming a specific function costs
    // the function name's tokens plus 4 tokens of overhead
    const functionCallTokens =
      function_call === 'none' ? 1 : stringTokens(function_call.name) + 4
    return promptTokens + functionCallTokens
  }
  return promptTokens
}
```
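
For reference, calling it with a forced function call looks like this (the request below is a made-up example reusing the `get_weather` placeholder from above):

```ts
const estimate = calculateTokenCountForChat({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is the weather in Boston?' }],
  functions: [
    {
      name: 'get_weather',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
      },
    },
  ],
  // Forcing a specific function adds 4 + stringTokens('get_weather')
  // tokens on top of promptTokensEstimate()'s result
  function_call: { name: 'get_weather' },
})
```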