hmarr / openai-chat-tokens

💬 Estimate the number of tokens an OpenAI chat completion will use

Account for token overhead of the `function_call` param #9

Closed · b0o closed 1 year ago

b0o commented 1 year ago

First of all, I wanted to thank you for creating this project; it's been a huge help.

I noticed that the `promptTokensEstimate()` function doesn't take the `function_call` param to `openai.chat.completions.create()` into account. This means the estimate comes in below the actual prompt token count whenever `function_call` is set to `'none'` or to a specific function.

It would be nice if openai-chat-tokens supported this natively.

In the meantime, here's a wrapper function I'm using:

import type { OpenAI } from 'openai'
import { promptTokensEstimate, stringTokens } from 'openai-chat-tokens'

export function calculateTokenCountForChat({
  messages,
  functions,
  function_call,
}: OpenAI.Chat.ChatCompletionCreateParamsNonStreaming | OpenAI.Chat.ChatCompletionCreateParamsStreaming): number {
  const promptTokens = promptTokensEstimate({ messages, functions })
  // 'auto' is the default behavior, so it adds no extra tokens
  if (function_call && function_call !== 'auto') {
    // 'none' still costs one token; forcing a specific function costs the
    // tokens of the function name plus a fixed overhead of 4 tokens
    const functionCallTokens = function_call === 'none' ? 1 : stringTokens(function_call.name) + 4
    return promptTokens + functionCallTokens
  }
  return promptTokens
}
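For example, usage looks like this (the function name and schema below are just placeholders, not part of the library):

const estimate = calculateTokenCountForChat({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is the weather in Berlin?' }],
  functions: [
    {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  ],
  // Forcing get_weather adds its name's tokens plus the fixed overhead
  function_call: { name: 'get_weather' },
})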
hmarr commented 1 year ago

Thank you for the detailed issue report and PR @b0o! 🙌

This will go out in the next release (v0.2.7), which I'll get published shortly.