Closed AndreCNF closed 5 months ago
Thanks for raising this! I’m not able to run and test for another day or so, but can you please also run the unit tests to account for the num tokens change? I believe these are breaking changes
I've made some extra changes so that the unit tests pass again ✅
Thanks for this! Overall, I think the LlamaIndex stuff should work, but I'd prefer not to merge the new way of returning token counts + cost in the single function. We already have token counting functionality as separate functions, so there's no need to overload what we already have.
The main issue is that these changes will break every other users' installations of tokencost because the cost estimators will return tuples instead of decimals.
One possible alternative is creating a function called "calculate_everything" (or something like that) that returns a dict/tuple of all the information.
My latest commit reverts the breaking changes and implements a `calculate_all_costs_and_tokens` function that is used in the Llama Index callback.
Let me know if there are still any issues to resolve.
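For reference, a minimal sketch of the non-breaking aggregate approach: the existing estimators keep returning `Decimal`s, and a new function bundles everything into a dict. The signature, tokenizer, and prices here are placeholders for illustration, not the actual tokencost implementation:

```python
from decimal import Decimal

# Hypothetical per-token prices for illustration only; the real library
# looks these up in its per-model pricing table.
PROMPT_PRICE = Decimal("0.000001")
COMPLETION_PRICE = Decimal("0.000002")

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: splits on whitespace instead of using tiktoken.
    return len(text.split())

def calculate_prompt_cost(prompt: str) -> Decimal:
    # Existing estimators keep returning a Decimal, so callers don't break.
    return count_tokens(prompt) * PROMPT_PRICE

def calculate_completion_cost(completion: str) -> Decimal:
    return count_tokens(completion) * COMPLETION_PRICE

def calculate_all_costs_and_tokens(prompt: str, completion: str) -> dict:
    # New aggregate helper: returns all costs and token counts in one
    # dict instead of changing the return type of the existing functions.
    return {
        "prompt_cost": calculate_prompt_cost(prompt),
        "prompt_tokens": count_tokens(prompt),
        "completion_cost": calculate_completion_cost(completion),
        "completion_tokens": count_tokens(completion),
    }
```

This keeps backward compatibility: anyone already calling `calculate_prompt_cost` still gets a plain `Decimal`, while callback code that wants everything at once can use the new helper.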
Champion. Thank you so much @AndreCNF!
This pull request fixes the Llama Index imports in the code and adds functionality to track costs and token counts in the `TokenCostHandler` class. The `TokenCostHandler` class now includes variables to store the prompt cost, completion cost, prompt tokens, and completion tokens. The `calculate_prompt_cost` and `calculate_completion_cost` functions have been updated to return the cost and number of tokens. Additionally, the `reset_counts` method has been added to reset the counts (e.g. at the end of each query of an agent, similarly to what is done with Llama Index's original token counting callback).
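A rough sketch of how such a callback handler could accumulate and reset counts. The attribute and method names mirror the description above, but the body is illustrative, not the actual `TokenCostHandler` code (in the real handler the per-call values would come from the cost estimators on each LLM event):

```python
from decimal import Decimal

class TokenCostHandler:
    """Illustrative handler that accumulates costs and token counts
    across LLM calls, with a reset between agent queries."""

    def __init__(self) -> None:
        self.reset_counts()

    def reset_counts(self) -> None:
        # Clear accumulated state, e.g. at the end of each agent query,
        # mirroring Llama Index's original token counting callback.
        self.prompt_cost = Decimal(0)
        self.completion_cost = Decimal(0)
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_end(self, prompt_cost: Decimal, completion_cost: Decimal,
                   prompt_tokens: int, completion_tokens: int) -> None:
        # Accumulate the per-call values into running totals.
        self.prompt_cost += prompt_cost
        self.completion_cost += completion_cost
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens
```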