Closed: CLRafaelR closed this issue 1 year ago.
🤖
Hello @CLRafaelR,
I'm Dosu, an AI bot here to assist you with your needs related to the LangChain repository. I can help you debug issues, answer your questions, and provide guidance on how to contribute to our project. While you're waiting for a human maintainer, I'm here to provide initial support.
I'm currently reviewing your issue and will get back to you with a more complete response shortly.
🤖
Thank you for bringing this to our attention. It seems like the `get_openai_callback()` function is not currently set up to handle the newly introduced models `gpt-3.5-turbo-1106` and `gpt-4-1106-preview`.

The `get_openai_callback()` function calculates the cost based on the `MODEL_COST_PER_1K_TOKENS` dictionary, which currently does not include the new models. This is why the cost is not being displayed correctly.

To resolve this issue, you would need to add the new models to the `MODEL_COST_PER_1K_TOKENS` dictionary in the `openai_info.py` file with their respective costs per 1,000 tokens. Here's how you can do it:
```python
MODEL_COST_PER_1K_TOKENS = {
    ...
    "gpt-3.5-turbo-1106": <cost>,
    "gpt-3.5-turbo-1106-completion": <cost>,
    "gpt-4-1106-preview": <cost>,
    "gpt-4-1106-preview-completion": <cost>,
    ...
}
```

Please replace `<cost>` with the actual cost per 1,000 tokens for each model. Once you've done this, the `get_openai_callback()` function should be able to calculate the total cost for these models correctly.
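For reference, here is a hedged fill-in of those placeholders using the per-1,000-token prices OpenAI published for these models at DevDay, which also match the figures quoted in the issue body below; verify the exact numbers against the official pricing page before relying on them:

```python
# A sketch, not the merged fix: per-1K-token prices published at DevDay.
# Prompt cost uses the bare model name; completion cost uses the
# "-completion" suffix, matching the dictionary's existing convention.
MODEL_COST_PER_1K_TOKENS = {
    # ... existing entries ...
    "gpt-3.5-turbo-1106": 0.001,
    "gpt-3.5-turbo-1106-completion": 0.002,
    "gpt-4-1106-preview": 0.01,
    "gpt-4-1106-preview-completion": 0.03,
    # ... existing entries ...
}
```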
I hope this helps! If you have any other questions or run into any issues, please let me know.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
On the topic of this, I think the current way of reporting $0 for unknown models is not ideal, especially since somebody might falsely rely on the cost data and then be billed more than they expect. Instead, I think that if the cost cannot be calculated, the callback should report an error rather than reporting back $0.
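A minimal sketch of that proposal, assuming the shape of the cost-lookup helper in `openai_info.py`; the function name mirrors the real module, but the error-raising branch is the suggested behavior, not the library's current handling of unknown models:

```python
# Sketch of the proposed behavior: fail loudly instead of silently
# returning $0 when a model is missing from the pricing table.
def get_openai_token_cost_for_model(
    model_name: str, num_tokens: int, is_completion: bool = False
) -> float:
    # Completion prices are stored under a "-completion" suffix.
    key = model_name.lower() + ("-completion" if is_completion else "")
    if key not in MODEL_COST_PER_1K_TOKENS:
        raise ValueError(
            f"No cost data for model {model_name!r}; "
            "add it to MODEL_COST_PER_1K_TOKENS instead of assuming $0."
        )
    return MODEL_COST_PER_1K_TOKENS[key] * (num_tokens / 1000)
```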
@IsakNyberg I totally agree with you. An error message should appear.
Can we also update the pricing information for the latest OpenAI models (the 0125 releases)?
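For reference, a hedged sketch with the per-1,000-token prices OpenAI listed when the 0125 models launched; these figures are assumptions recalled from the pricing page, so they should be verified before merging:

```python
# Assumed per-1K-token prices for the 0125 models; verify against
# OpenAI's pricing page before adding them to the dictionary.
MODEL_COST_PER_1K_TOKENS.update({
    "gpt-3.5-turbo-0125": 0.0005,
    "gpt-3.5-turbo-0125-completion": 0.0015,
    "gpt-4-0125-preview": 0.01,
    "gpt-4-0125-preview-completion": 0.03,
})
```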
When we employ the new models introduced during DevDay, `gpt-3.5-turbo-1106` and `gpt-4-1106-preview`, `get_openai_callback()` does not accurately display their total cost, which includes both the prompt and completion token costs. The minimal working example below illustrates this issue. The costs should not be $0.0, but rather (per-1K price × tokens / 1000) $0.01 × 30 / 1000 + $0.03 × 598 / 1000 = $0.01824 for `gpt-4-1106-preview` and $0.0010 × 30 / 1000 + $0.0020 × 160 / 1000 = $0.00035 for `gpt-3.5-turbo-1106`.
MWE
Output of MWE
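(The collapsed MWE and its output are not reproduced here. Below is a hypothetical reconstruction of what such an MWE likely looks like, assuming the `langchain` chat-model and callback APIs of late 2023; the prompt string is a placeholder.)

```python
# Hypothetical reconstruction of the collapsed MWE; import paths reflect
# langchain as of late 2023 (langchain.callbacks / langchain.chat_models).
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI

for model_name in ["gpt-4-1106-preview", "gpt-3.5-turbo-1106"]:
    llm = ChatOpenAI(model_name=model_name)
    with get_openai_callback() as cb:
        llm.predict("Summarize what a callback handler does.")  # placeholder prompt
    # With the 1106 models, total_cost came back as 0.0
    # even though prompt and completion tokens were counted.
    print(model_name, cb.total_tokens, cb.total_cost)
```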
It is necessary to include the cost per 1,000 tokens in `MODEL_COST_PER_1K_TOKENS`, which is defined in the following lines, in accordance with OpenAI's official pricing page (here for gpt-4-turbo; here for gpt-3.5-turbo): https://github.com/langchain-ai/langchain/blob/ff87f4b4f90c1d13ddb79120c6ded6c0af2959b7/libs/langchain/langchain/callbacks/openai_info.py#L7C1-L35