guiramos opened 9 months ago
I think this is a good idea. Cost should be an attribute of Core
rather than the TUI.
The changes would be straightforward: move
interpreter/terminal_interface/utils/count_tokens.py
to interpreter/core/utils/
and update the imports in interpreter/terminal_interface/magic_commands.py.
As @Notnaton pointed out, we can remove the dependency on tiktoken as well.
LiteLLM supports encoding text to allow cost_per_token calculations: https://docs.litellm.ai/docs/completion/token_usage
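For reference, roughly what that looks like with the helpers described on that docs page (a sketch; the model string and message are just examples):

```python
import litellm

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# token_counter encodes the messages for the given model and returns the count,
# so a separate tiktoken dependency is not needed.
prompt_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages)

# cost_per_token returns (prompt_cost_usd, completion_cost_usd) for the model.
prompt_cost, completion_cost = litellm.cost_per_token(
    model="gpt-3.5-turbo",
    prompt_tokens=prompt_tokens,
    completion_tokens=0,
)
print(f"prompt cost: {prompt_cost}; completion cost: {completion_cost}")
```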
@KillianLucas do you support this change?
Also, if possible: OpenInterpreter defaults to "gpt-4", which is the most expensive model. Can we provide the model as an argument?
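For anyone hitting the default-model cost in the meantime, a sketch of overriding it; `interpreter.llm` is the settings object this thread's llm.py references, but verify the attribute name against your installed version:

```python
from interpreter import interpreter

# Assumed attribute, matching the llm settings object mentioned in this thread.
interpreter.llm.model = "gpt-3.5-turbo"
interpreter.chat("Hello")
```

From the CLI, the equivalent should be `interpreter --model gpt-3.5-turbo`.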
@Arrendy @KillianLucas
We already have a cost limit in llm.py:
self.max_budget = None
@Notnaton, the cost limit is different from what the last interaction with the chat actually cost. Programmatically, that value is basically unreachable after you call interpreter.chat().
interpreter.llm.max_budget should be reachable...
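A short sketch of the asymmetry being described here (attribute names as quoted in this thread):

```python
from interpreter import interpreter

interpreter.chat("Hello")

# The budget cap is a plain attribute, so this is reachable:
print(interpreter.llm.max_budget)

# ...but there is no attribute exposing what that chat() call cost;
# the "final cost" value only appears in the verbose logs.
```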
I will have a look at it when I come home in an hour. If I'm misunderstanding something please clarify.
interpreter.llm.max_budget is reachable; what is not reachable is the cost of the last interaction with the chat.
We can see in the logs a line like this:
final cost: 0.036570000000000005; prompt_tokens_cost_usd_dollar: 0.03579; completion_tokens_cost_usd_dollar: 0.00078
But you can't access that data outside the interpreter instance.
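One possible shape for the fix, as a purely hypothetical sketch: have the core's llm object record the cost after every completion. last_interaction_cost is an invented name; litellm.completion_cost is the LiteLLM helper that prices a completion response.

```python
import litellm

class Llm:
    def __init__(self):
        self.model = "gpt-4"
        self.max_budget = None
        # Hypothetical attribute: cost (USD) of the most recent completion.
        self.last_interaction_cost = 0.0

    def run(self, messages):
        response = litellm.completion(model=self.model, messages=messages)
        # completion_cost() prices the response; this is the same number that
        # shows up in the "final cost: ..." log line when verbose is on.
        self.last_interaction_cost = litellm.completion_cost(
            completion_response=response
        )
        return response
```

Then, after interpreter.chat() returns, interpreter.llm.last_interaction_cost would hold the value that is currently only printed to the logs.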
Is your feature request related to a problem? Please describe.
No response
Describe the solution you'd like
When using open-interpreter as part of my python project, I'd like to have access to the cost of each interaction of the llm.
So, after each call to interpreter.chat there should be a way to extract the cost. The payload of the cost is visible in the logs and described in the 'Additional context' field.
Describe alternatives you've considered
No response
Additional context
When verbose = True we can see in the logs:
final cost: 0.036570000000000005; prompt_tokens_cost_usd_dollar: 0.03579; completion_tokens_cost_usd_dollar: 0.00078
I think this is coming from litellm.
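If it is LiteLLM emitting that line, one workaround that doesn't require changing open-interpreter is a LiteLLM success callback (a sketch; recent LiteLLM versions pass the computed cost to custom callbacks as kwargs["response_cost"], so verify against your version):

```python
import litellm

costs = []

def track_cost(kwargs, completion_response, start_time, end_time):
    # "response_cost" being populated for successful calls is an assumption
    # about recent LiteLLM versions; check the callback docs for yours.
    costs.append(kwargs.get("response_cost", 0.0))

litellm.success_callback = [track_cost]
```

Any litellm.completion() call made by the interpreter afterwards would then append its cost to costs.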