Sounds reasonable 👍 I would not want to add the temporary snapshot versions of GPT-3 and GPT-4 to the enum, to avoid having to update the library every three months when they are deprecated. But for the rest of the models, we can totally add the max tokens. The only ones that seem to lack a documented token limit are `davinci-codex`, `cushman-codex`, `text-davinci-001`, `text-davinci-edit-001`, and `code-davinci-edit-001`.
I will look into those tomorrow
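For reference, a rough sketch of how enum entries carrying a max tokens value could look (the field name, accessor name, and the exact limits shown here are placeholders, not necessarily the released API):

```java
// Rough sketch only: names and limits are placeholders,
// not necessarily what ships in the release.
public enum ModelType {
    GPT_4("gpt-4", 8192),
    GPT_3_5_TURBO("gpt-3.5-turbo", 4096),
    TEXT_DAVINCI_003("text-davinci-003", 4097);

    private final String name;
    private final int maxContextLength;

    ModelType(final String name, final int maxContextLength) {
        this.name = name;
        this.maxContextLength = maxContextLength;
    }

    public String getName() {
        return name;
    }

    public int getMaxContextLength() {
        return maxContextLength;
    }
}
```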
The change is included in the 0.3.0 release.
It should soon be available on Maven Central.
Thank you for your suggestion 🙂
Would it be possible to add a max tokens property to the `ModelType` enum, specifying the maximum number of tokens the model can consume? With this property added, one could add a method for checking whether a prompt is too big for a given model.
I've extracted the max tokens for the models from OpenAI's page:
I am not sure what the limitations are for the other models, though...
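To make the idea concrete, something along these lines is roughly what I had in mind. This is only a sketch: the helper name and the `getMaxContextLength()` accessor are hypothetical, and the `Encoding`/`EncodingRegistry` calls and imports assume the library's existing token-counting API.

```java
// Sketch of the check the new property would enable.
// The Encoding / EncodingRegistry usage assumes the library's existing API;
// getMaxContextLength() is the hypothetical new property requested above.
import com.knuddels.jtokkit.Encodings;
import com.knuddels.jtokkit.api.Encoding;
import com.knuddels.jtokkit.api.EncodingRegistry;
import com.knuddels.jtokkit.api.ModelType;

public final class PromptSizeCheck {

    private static final EncodingRegistry REGISTRY = Encodings.newDefaultEncodingRegistry();

    // True if the prompt's token count fits within the model's context window.
    public static boolean fitsModel(final String prompt, final ModelType model) {
        final Encoding encoding = REGISTRY.getEncodingForModel(model);
        return encoding.countTokens(prompt) <= model.getMaxContextLength();
    }

    public static void main(final String[] args) {
        System.out.println(fitsModel("Hello, world!", ModelType.GPT_4)); // expected: true
    }
}
```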