64bit / async-openai

Rust library for OpenAI
https://docs.rs/async-openai
MIT License

CreateChatCompletionRequest.max_tokens should be u32 #232

Closed MakotoE closed 2 months ago

MakotoE commented 2 months ago

CreateChatCompletionRequest.max_tokens is an Option<u16> as of 0.23.1.

Newer models such as gpt-4o have a context window of 128,000 tokens. This context window limit is the sum of input and output tokens.

I believe the max_tokens field should be Option<u32> to allow values as high as 128,000.
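To illustrate the range mismatch (a minimal sketch, not library code): u16 tops out at 65,535, so a u16-typed max_tokens can never express gpt-4o's 128,000-token window, while u32 covers it easily.

```rust
fn main() {
    // gpt-4o's context window, which max_tokens may need to approach.
    let window: u32 = 128_000;

    // u16 cannot represent values above 65,535, so the current
    // Option<u16> field cannot hold 128,000.
    assert!(window > u16::MAX as u32);

    // A u32 field has ample headroom (u32::MAX = 4,294,967,295).
    let max_tokens: u32 = window;
    println!("u16::MAX = {}, needed = {}", u16::MAX, max_tokens);
}
```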

64bit commented 2 months ago

Thank you for reporting the bug; a PR is most welcome!