Closed: areibman closed this issue 1 week ago
We can be more specific here @areibman, since it's not yet standard what 'long' and 'short' mean. This seems similar to how tgai pricing works based on token params. What if we do input_cost_per_token_up_to_128k and input_cost_per_token_above_128k?
That would probably work! The only precaution I can think of is some providers starting to offer multi-tier pricing per model, i.e. <128k, 128k-256k, 256k-512k, etc.
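If multi-tier pricing does show up, one option is an ordered list of (upper bound, rate) pairs rather than one field per tier. This is only a sketch under that assumption; the structure, field names, and rates below are illustrative, not LiteLLM's actual schema:

```python
# Hypothetical sketch: a tier list that generalizes beyond two tiers.
# Bounds and rates are placeholders, not real provider prices.
from math import inf

# Ordered (upper_bound_tokens, usd_per_input_token) tiers.
INPUT_TIERS = [
    (128_000, 3.5e-06),  # up to 128k tokens
    (256_000, 7.0e-06),  # 128k to 256k tokens
    (inf,     1.4e-05),  # above 256k tokens
]

def input_rate(prompt_tokens: int) -> float:
    """Return the per-token input rate for the tier the prompt falls in."""
    for upper_bound, rate in INPUT_TIERS:
        if prompt_tokens <= upper_bound:
            return rate
    raise ValueError("tiers must end with an unbounded tier")
```

Because the last tier is unbounded, adding a new tier is just inserting one more pair, with no new dictionary keys per tier.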
This should be as easy as updating the proxy's JSON, no?
The Feature
Some models price tokens differently based on the length of the prompt. It would be helpful to restructure, or add fields to, the model price dictionary to account for this. This could look something like the sketch below:
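As an illustration only, here is a model-price entry using the field names proposed in the comments above. The keys (input_cost_per_token_up_to_128k, etc.) are the hypothetical names from this thread, and the rates are placeholders, not confirmed LiteLLM schema or Google prices:

```python
# Hypothetical sketch: keys and rates are illustrative placeholders.
model_prices = {
    "gemini-1.5-pro": {
        # Proposed tiered fields (USD per token):
        "input_cost_per_token_up_to_128k": 3.5e-06,
        "input_cost_per_token_above_128k": 7.0e-06,
        "output_cost_per_token_up_to_128k": 1.05e-05,
        "output_cost_per_token_above_128k": 2.1e-05,
    }
}

def prompt_cost(model: str, prompt_tokens: int) -> float:
    """Cost of the prompt, billed at the rate of the tier it falls in."""
    entry = model_prices[model]
    if prompt_tokens > 128_000:
        rate = entry["input_cost_per_token_above_128k"]
    else:
        rate = entry["input_cost_per_token_up_to_128k"]
    return prompt_tokens * rate

# Example: a 200k-token prompt is billed entirely at the above-128k rate.
print(prompt_cost("gemini-1.5-pro", 200_000))  # 200000 * 7.0e-06 = 1.40
```

Note this assumes the whole prompt is billed at the rate of the tier it lands in, rather than marginal per-tier billing; if a provider bills marginally instead, the lookup would need to sum across tiers.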
Motivation, pitch
Google models price tokens differently for prompts longer than 128k tokens; see https://ai.google.dev/pricing.
This was brought up in https://github.com/AgentOps-AI/tokencost/issues/53; tokencost relies on the LiteLLM cost tracker.
Twitter / LinkedIn details
https://www.twitter.com/alexreibman