lukestanley opened 2 weeks ago
This would be great, actually. The main considerations are around how to manage the cost dictionary:
The big challenge is that we rely on a 3rd-party cost dictionary managed by LiteLLM. We also have a function that pulls the latest dictionary from their repo and updates the TOKEN_COSTS variable.
I raised an issue with LiteLLM about this just now. I think your solution makes sense, but we'd need to figure out how to update the cost dictionary first. Let's see if LiteLLM is willing to make the change; if not, we could potentially add a sidecar dictionary that we merge prices with in the meantime.
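The sidecar idea could be sketched roughly like this — a local override dictionary merged on top of the LiteLLM-derived prices, so local corrections win until upstream catches up. The names `TOKEN_COSTS`, `SIDECAR_COSTS`, and `merge_costs` are illustrative assumptions, and the prices are placeholders, not real rates:

```python
# LiteLLM-derived dictionary (illustrative placeholder values)
TOKEN_COSTS = {
    "gpt-4o": {"input_cost_per_token": 5e-06, "output_cost_per_token": 1.5e-05},
}

# Local "sidecar" overrides and additions maintained in this repo (hypothetical)
SIDECAR_COSTS = {
    "gpt-4o": {"output_cost_per_token": 2.0e-05},  # local correction
    "new-model": {"input_cost_per_token": 1e-06, "output_cost_per_token": 2e-06},
}

def merge_costs(base: dict, sidecar: dict) -> dict:
    """Return a new dict where sidecar entries override or extend the base."""
    merged = {model: dict(costs) for model, costs in base.items()}
    for model, costs in sidecar.items():
        merged.setdefault(model, {}).update(costs)
    return merged

MERGED_COSTS = merge_costs(TOKEN_COSTS, SIDECAR_COSTS)
```

A re-fetch of the upstream dictionary would then just rebuild `MERGED_COSTS`, keeping the sidecar as the single place for local fixes.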
@lukestanley LiteLLM just merged some new changes that make this easier: https://github.com/BerriAI/litellm/issues/4229#event-13190865904
I don't have a ton of capacity this week, but happy to merge if you raise a PR (looks like you wrote 90% of the code anyway). Otherwise I'll get to it when I get to it
The upstream change is interesting, but it makes me a bit concerned about the scalability of a potential multitude of token-count-specific variables to check to find the applicable price rule, and I wonder about the complexity needed for that. I'm curious what LiteLLM's cost calculations are like! If they have a cost estimation function with a compatible license, I wonder if copying it directly in a somewhat automated way might make more sense. Anyhow, it's late here, but I'll try and look into it tomorrow.
I saw on https://ai.google.dev/pricing that the latest Gemini models have 2 different pricing bands, based on the input length. To support this, the pricing logic and data structure might need some changes. Maybe something like this:
Proposed new optional properties:

- `max_input_tokens_short`: The maximum number of input tokens for the lower pricing band.
- `input_cost_per_token_short`: Cost per input token when the input token count is within the `max_input_tokens_short` limit.
- `input_cost_per_token_long`: Cost per input token when the input token count exceeds the `max_input_tokens_short` limit.
- `output_cost_per_token_short`: Cost per output token when the input token count is within the `max_input_tokens_short` limit.
- `output_cost_per_token_long`: Cost per output token when the input token count exceeds the `max_input_tokens_short` limit.

Obviously it'd need tests and the numbers reviewing. Wouldn't be surprised if I've got at least 0 off-by-one errors! ;)
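As a rough sketch of how those properties could drive the cost calculation — the model name and prices below are made-up placeholders, and `estimate_cost` is a hypothetical helper, not an existing tokencost or LiteLLM function:

```python
# Two-band pricing entry using the proposed optional properties
# (placeholder numbers, not real Gemini rates)
PRICING = {
    "gemini-example": {
        "max_input_tokens_short": 128_000,
        "input_cost_per_token_short": 3.5e-06,
        "input_cost_per_token_long": 7.0e-06,
        "output_cost_per_token_short": 1.05e-05,
        "output_cost_per_token_long": 2.1e-05,
    },
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Pick the short or long band based on the *input* token count,
    then price both input and output tokens at that band's rates."""
    p = PRICING[model]
    if input_tokens <= p["max_input_tokens_short"]:
        in_rate = p["input_cost_per_token_short"]
        out_rate = p["output_cost_per_token_short"]
    else:
        in_rate = p["input_cost_per_token_long"]
        out_rate = p["output_cost_per_token_long"]
    return input_tokens * in_rate + output_tokens * out_rate
```

Models without the new properties would just keep using the flat `input_cost_per_token` / `output_cost_per_token` path, so the change could stay backwards-compatible.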
Seems like a very useful library. I didn't find the info I needed elsewhere, so I hope this helps. What do you think? @areibman