Cainier / gpt-tokens

Calculate the token consumption and cost of OpenAI GPT messages
MIT License

Support `gpt-4-1106-preview` model #32

Closed · sebastiansandqvist closed this 10 months ago

sebastiansandqvist commented 10 months ago

This PR begins the work needed to support the new `gpt-4-1106-preview` GPT-4 Turbo preview model. In its current state the PR is incomplete, since it first requires the js-tiktoken dependency to support that model. That work within js-tiktoken is underway and can be tracked here: https://github.com/dqbd/tiktoken/pull/79

Once that is complete, upgrading the dependency should resolve the type error in `getEncodingForModelCached`, which currently fails because `gpt-4-1106-preview` is not a supported model type.
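
For context, here is a minimal sketch of what such a cached lookup typically looks like; the actual `getEncodingForModelCached` in gpt-tokens may be shaped differently, but `encodingForModel` and the `TiktokenModel` type are real js-tiktoken exports:

```ts
import { encodingForModel, type TiktokenModel } from "js-tiktoken";

// Hypothetical sketch of a cached encoder lookup, not the exact
// gpt-tokens implementation. The type error arises on the call to
// encodingForModel: "gpt-4-1106-preview" is not a member of
// TiktokenModel until js-tiktoken adds it.
const encodingCache = new Map<TiktokenModel, ReturnType<typeof encodingForModel>>();

function getEncodingForModelCached(model: TiktokenModel) {
  let encoding = encodingCache.get(model);
  if (!encoding) {
    encoding = encodingForModel(model);
    encodingCache.set(model, encoding);
  }
  return encoding;
}
```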

I was unsure where to find the values for `tokens_per_message` and `tokens_per_name`, so I left them the same as their GPT-4 counterparts.
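
For reference, these constants feed into OpenAI's cookbook counting scheme: every message pays a fixed per-message overhead, and a message with a `name` field pays one extra token. Below is a sketch of that scheme, assuming the GPT-4 values carried over in this PR (`tokens_per_message = 3`, `tokens_per_name = 1`) also hold for `gpt-4-1106-preview`; the function and interface names are illustrative, not gpt-tokens internals:

```ts
import { encodingForModel } from "js-tiktoken";

interface ChatMessage {
  role: string;
  content: string;
  name?: string;
}

// Sketch of the OpenAI-cookbook message token count, using the GPT-4
// constants this PR carries over (whether gpt-4-1106-preview uses the
// same values is exactly the open question above).
function countMessageTokens(messages: ChatMessage[]): number {
  const enc = encodingForModel("gpt-4"); // stand-in until the new model is supported
  const tokensPerMessage = 3;
  const tokensPerName = 1;
  let total = 0;
  for (const message of messages) {
    total += tokensPerMessage;
    total += enc.encode(message.role).length;
    total += enc.encode(message.content).length;
    if (message.name) total += tokensPerName;
  }
  return total + 3; // every reply is primed with <|start|>assistant<|message|>
}
```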

Cainier commented 10 months ago

Thank you, I've merged your changes into main and released v1.1.3 based on them.