Closed toniengelhardt closed 8 months ago
This also breaks the completion_cost function.
I believe you're just missing the required flag count_response_tokens:
def test_token_counter():
    from litellm import token_counter

    model = 'gpt-3.5-turbo'  # also reproduces with 'gpt-4-1106-preview'
    text = 'Hello, this is a text with roughly 11 tokens.'
    count = token_counter(model=model, text=text, count_response_tokens=True)
    print(count)
    raise Exception("it worked!")  # force pytest to surface the printed count
Also added this scenario to our token_counter logic; it should be fixed in v1.22.4.
What happened?
This code returns 3, when it should be ~11. It used to work, but broke with one of the last updates.
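As a rough sanity check on the expected magnitude (this is just a whitespace word count, not the BPE tokenizer litellm actually uses):

```python
text = 'Hello, this is a text with roughly 11 tokens.'

# A whitespace split is a crude lower-bound proxy for the token count;
# BPE tokenizers typically emit a few more tokens than there are words
# (punctuation and subword splits), hence "roughly 11" for 9 words.
approx = len(text.split())
print(approx)  # 9
```

A result of 3 is far below even this word-count floor, which is what makes the regression obvious.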
Relevant log output
No response