Cainier / gpt-tokens

Calculate the token consumption and cost of OpenAI GPT messages
MIT License

Support `gpt-4-1106-preview` model #32

Closed: sebastiansandqvist closed this 1 year ago

sebastiansandqvist commented 1 year ago

This PR begins the work needed to support the new `gpt-4-1106-preview` (GPT-4 Turbo) preview model. In its current state the PR is incomplete, since it first requires the js-tiktoken dependency to support that model. That work in js-tiktoken is underway and can be tracked here: https://github.com/dqbd/tiktoken/pull/79

Once that lands, upgrading the dependency should resolve the type error in `getEncodingForModelCached`, which currently fails because `gpt-4-1106-preview` is not a supported model type.
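In the meantime, a cast plus a runtime fallback can work around the type error. Here is a minimal sketch; the map-based cache shape and the fallback to `cl100k_base` (the encoding shared by the gpt-4 family) are my assumptions, not necessarily how the library implements it:

```typescript
import { getEncoding, encodingForModel } from "js-tiktoken";
import type { Tiktoken, TiktokenModel } from "js-tiktoken";

const encodingCache = new Map<string, Tiktoken>();

function getEncodingForModelCached(model: string): Tiktoken {
  let encoding = encodingCache.get(model);
  if (!encoding) {
    try {
      // The cast suppresses the compile-time error for models js-tiktoken
      // does not yet list; known models resolve normally.
      encoding = encodingForModel(model as TiktokenModel);
    } catch {
      // Unknown models (e.g. gpt-4-1106-preview before the upstream PR
      // lands) fall back to cl100k_base, which the gpt-4 family uses.
      encoding = getEncoding("cl100k_base");
    }
    encodingCache.set(model, encoding);
  }
  return encoding;
}
```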

I was unsure where to find the values for `tokens_per_message` and `tokens_per_name`, so I left them the same as their GPT-4 counterparts.
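For context, those constants come from the chat-format counting scheme in OpenAI's cookbook (`num_tokens_from_messages`), where every gpt-4 variant uses `tokens_per_message = 3` and `tokens_per_name = 1`. A rough sketch of how they enter the count, assuming `gpt-4-1106-preview` follows the same scheme:

```typescript
import { getEncoding } from "js-tiktoken";

interface ChatMessage {
  role: string;
  content: string;
  name?: string;
}

// Sketch of the cookbook's num_tokens_from_messages for the gpt-4 family.
// Assumes gpt-4-1106-preview shares gpt-4's constants and cl100k_base.
function numTokensFromMessages(messages: ChatMessage[]): number {
  const enc = getEncoding("cl100k_base");
  const tokensPerMessage = 3; // per-message framing tokens (gpt-4 value)
  const tokensPerName = 1;    // extra token when a name field is present
  let total = 0;
  for (const message of messages) {
    total += tokensPerMessage;
    total += enc.encode(message.role).length;
    total += enc.encode(message.content).length;
    if (message.name) {
      total += enc.encode(message.name).length + tokensPerName;
    }
  }
  return total + 3; // every reply is primed with <|start|>assistant<|message|>
}
```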

Cainier commented 1 year ago

Thank you, I've merged your changes into main and published v1.1.3 based on them.
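For anyone landing here, v1.1.3 should then allow usage along these lines. This is a sketch following the library's README-style `GPTTokens` constructor; verify the option and property names against the current docs:

```typescript
import { GPTTokens } from "gpt-tokens";

// Count tokens and estimate cost for a chat with the new model.
const usage = new GPTTokens({
  model: "gpt-4-1106-preview",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "How many tokens is this conversation?" },
  ],
});

console.info("Used tokens:", usage.usedTokens);
console.info("Used USD:", usage.usedUSD);
```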