I use functions, in addition to a custom prompt and a history of messages, in my chatCompletion calls.
I need to calculate the number of tokens in my calls, to avoid hitting the token limit of the model.
I know how to calculate the number of tokens for my prompt, but how do I do it for the functions? Should I just tokenize the JSON describing the functions?
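To make the question concrete, here is a minimal sketch of the approach I'm considering: serialize the `functions` array to JSON and run it through a tokenizer. Everything here is an assumption on my part — the helper name `estimate_function_tokens`, the per-function `overhead_per_function` constant, and the chars/4 fallback heuristic are all made up; the `count_tokens` callable is where a real tokenizer (e.g. one built on tiktoken's `encoding.encode`) would plug in. I don't know whether this matches how the API actually accounts for function definitions.

```python
import json

def estimate_function_tokens(functions, count_tokens=None):
    """Rough token estimate for a chatCompletion `functions` payload.

    `count_tokens` is a callable str -> int (e.g. built on tiktoken's
    encoding.encode). If omitted, a crude chars/4 heuristic is used.
    """
    if count_tokens is None:
        # Crude approximation: ~4 characters per token for English/JSON.
        count_tokens = lambda text: max(1, len(text) // 4)
    # Serialize the function schemas roughly as they appear in the request.
    serialized = json.dumps(functions, separators=(",", ":"))
    # Assumed fixed overhead per function for whatever formatting the API
    # wraps around the definitions -- NOT a documented value.
    overhead_per_function = 8
    return count_tokens(serialized) + overhead_per_function * len(functions)

functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
print(estimate_function_tokens(functions))
```

My worry is that the model may not see the raw JSON verbatim (the API could reformat the schemas before injecting them into the context), in which case tokenizing the JSON would only give a ballpark figure rather than an exact count.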