Open WebsheetPlugin opened 4 months ago
can you post some logs on the above issue? Thanks @WebsheetPlugin
v0.2.25 works for me:
{'usage_excluding_cached_inference': {'gpt-4-turbo-2024-04-09': {'completion_tokens': 417,
'cost': 0.04359,
'prompt_tokens': 3108,
'total_tokens': 3525},
'total_cost': 0.04359},
'usage_including_cached_inference': {'gpt-4-turbo-2024-04-09': {'completion_tokens': 417,
'cost': 0.04359,
'prompt_tokens': 3108,
'total_tokens': 3525},
'total_cost': 0.04359}}
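As a sanity check, the logged total_cost can be reproduced from the token counts above and the published gpt-4-turbo rates (USD 10 per 1M prompt tokens, USD 30 per 1M completion tokens per openai.com/pricing at the time; rates may change):

```python
# Reproduce the logged total_cost from the token counts in the summary above.
# Rates are per 1M tokens for gpt-4-turbo as listed on openai.com/pricing
# around April 2024; they may have changed since.
PROMPT_RATE = 10.00 / 1_000_000
COMPLETION_RATE = 30.00 / 1_000_000

prompt_tokens = 3108
completion_tokens = 417

cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
print(round(cost, 5))  # 0.04359
```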
Because you have the latest version. I would prefer a solution that does not require me to git pull the latest version for every OpenAI model update...
I'd like that too 😄 . How?
A) It seems that currently you are hardcoding the price for each model type separately. But if a model is not present, you could simply compare the prefix. In this case, I believe the prefix "gpt-4-turbo" is enough to determine the price range :)
B) Keep in mind that gpt-4-turbo now automatically points to this new model, which means all running applications on earlier versions stop calculating costs correctly. So it's a huge problem.
C) As an additional idea, which might be useful for non-OpenAI models whose pricing was never added: I propose allowing prompt/completion prices to be set via llm_settings.
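Idea A) could be sketched as a longest-prefix fallback over the hardcoded price table. The table and prices below are illustrative placeholders, not autogen's actual data:

```python
# Hypothetical sketch of idea A): when an exact model name is missing from
# the hardcoded price table, fall back to the longest matching prefix.
# (prompt, completion) prices in USD per 1K tokens; values are illustrative.
PRICE_PER_1K = {
    "gpt-4-turbo": (0.01, 0.03),
    "gpt-4": (0.03, 0.06),
}

def lookup_price(model: str):
    if model in PRICE_PER_1K:
        return PRICE_PER_1K[model]
    # Longest-prefix match so "gpt-4-turbo-2024-04-09" resolves to the
    # "gpt-4-turbo" entry before falling through to the shorter "gpt-4" one.
    for known in sorted(PRICE_PER_1K, key=len, reverse=True):
        if model.startswith(known):
            return PRICE_PER_1K[known]
    return None  # unknown model: caller decides (error, warn, or price 0)
```

With this, a dated snapshot like "gpt-4-turbo-2024-04-09" would inherit the "gpt-4-turbo" price instead of silently costing 0.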
On Thu, 18 Apr 2024, 13:48, Chi Wang @.***> wrote:
> I'd like that too 😄 . How?
The issues in A) and B) are true, though I don't have a good solution. The solution in C) makes sense by having the price provided by the user.
Do you think simply checking whether "gpt-4-turbo-2024-04-09" starts with "gpt-4-turbo" is not good? I think it's better than computing a cost of 0 :)
If you don't like the solution proposed above, then at least let's do C) so I can enter the prices manually and not worry about new model updates.
I think it's still a one-time solution and doesn't address the dynamic nature of the problem. C) sounds good to me. Feel free to make a PR.
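A minimal sketch of idea C), assuming a hypothetical "price" entry in the LLM config; the key name and the (prompt, completion)-per-1K-tokens convention are assumptions for illustration, not autogen's actual API:

```python
# Sketch of idea C): let the user supply prices in the LLM config so that
# unknown models get a real cost instead of a silent 0.
# The "price" key and its (prompt, completion)-per-1K format are hypothetical.
def cost_from_usage(usage: dict, llm_config: dict) -> float:
    prices = llm_config.get("price")
    if prices is None:
        return 0.0  # fall back to current behavior for unknown models
    prompt_price, completion_price = prices
    return (usage["prompt_tokens"] / 1000) * prompt_price \
         + (usage["completion_tokens"] / 1000) * completion_price

config = {"model": "gpt-4-turbo-2024-04-09", "price": (0.01, 0.03)}
usage = {"prompt_tokens": 3108, "completion_tokens": 417}
print(round(cost_from_usage(usage, config), 5))  # 0.04359
```

This keeps the hardcoded table as a default while letting users of new or non-OpenAI models opt in to correct cost tracking.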
Describe the bug
Pricing is missing again for the newest model.
The current way of handling costs is not optimal. It should be possible to determine that the new model is a gpt-4-turbo model as well, as the prefix is the same.
Steps to reproduce
Use gather_usage_summary([Agent])
from autogen.agentchat.utils import gather_usage_summary
with: [ { "model": "gpt-4-turbo", "api_key": "sk-xxx", "max_tokens": 4000 } ]
Model Used
gpt-4-turbo-2024-04-09, gpt-4-turbo
Expected Behavior
calculate cost for gpt-4-turbo as per https://openai.com/pricing
Screenshots and logs
No response
Additional Information
No response