Char15Xu opened 1 week ago
Good fix @Char15Xu. How do we calculate the cost for anthropic models?
@daiwaid Do you think this bug impacts our existing results (e.g., the graphs on the blog)?
No, all benchmark tests explicitly pass the currently configured model name into the `get_cost()` function.
The same code is used for anthropic models.
I see. @Char15Xu, but do we have a cost cache for the anthropic models?
Sorry, I missed this message. When running benchmarks, anthropic models also use the `libem.optimize.cost.openai` module; there is no separate cost cache for anthropic models.
Should there be one? If so, we should handle it in a future PR.
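For context, a shared cost lookup keyed by model name, as discussed above, might look roughly like this. All names and prices below are illustrative placeholders, not the actual `libem.optimize.cost.openai` code:

```python
# Hypothetical sketch of a single cost lookup shared across providers.
# The table, prices, and function name are illustrative only.

# Price per 1M tokens as (input, output); values are placeholder examples.
PRICING = {
    "gpt-4o": (2.50, 10.00),
    "claude-3-5-sonnet": (3.00, 15.00),
}

def get_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Look up pricing for the given model and compute the call cost in USD."""
    if model not in PRICING:
        raise KeyError(f"no pricing entry for model {model!r}")
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```

Because the lookup is keyed only by the model name, anthropic models work through the same path as openai models, which matches the behavior described above.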
#### What this PR does / why we need it:

This PR updates `model_info` when the model choice changes during matching. This ensures that the correct `model_info` is used for retrieving pricing.

#### Which issue(s) this PR addresses (if any):

Addresses #

#### Special notes for your reviewer:

#### Does this PR introduce a user-facing change?
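The fix this PR describes, refreshing the cached `model_info` when the model choice changes, could be sketched as follows. The names and prices here are hypothetical illustrations, not the PR's actual diff:

```python
# Hypothetical sketch: refresh a cached model_info whenever the configured
# model changes, so pricing lookups never use a stale entry.
# MODEL_INFO, set_model, and the prices are illustrative, not libem's code.

MODEL_INFO = {
    "gpt-4o": {"input_price": 2.50, "output_price": 10.00},
    "claude-3-5-sonnet": {"input_price": 3.00, "output_price": 15.00},
}

_current_model = None
_model_info = None

def set_model(name: str) -> None:
    """Update the cached model_info when the model choice changes."""
    global _current_model, _model_info
    if name != _current_model:
        _current_model = name
        _model_info = MODEL_INFO[name]  # refresh instead of reusing stale info

def input_price() -> float:
    """Read pricing from the currently cached model_info."""
    return _model_info["input_price"]
```

Without the refresh in `set_model`, switching models mid-run would leave pricing computed from the previous model's entry, which is the bug this PR fixes.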