Confirm this is a feature request for the Node library and not the underlying OpenAI API.
[X] This is a feature request for the Node library
Describe the feature or improvement you're requesting
Hi, I believe that `max_prompt_tokens` and `max_completion_tokens` are essential for using the full Assistants API. Without a way to cap token usage, the thread just keeps getting longer, and running the AI model becomes very expensive. Please add the `max_prompt_tokens` and `max_completion_tokens` parameters to `openai.beta.threads.runs.create`. Thanks!
Additional context
No response