Closed by moresearch 2 weeks ago
Hi @moresearch, thanks for the suggestion! It's a good one, but we are not planning to add this feature at the moment.
With that said, if you'd like to have this, please feel free to open a pull request that adds it to inference/. We'd really appreciate it!
Closing this as not planned, but please feel free to re-open / tag in a PR.
Describe the feature
Adding LLM token counts to the generated inference files would enable cost calculation, which is important for comparing different models.
Potential Solutions
https://github.com/AgentOps-AI/tokencost can be used to calculate prompt and completion costs for the generated inference files (https://drive.google.com/drive/folders/1EnrKzGAnsb_NmZKyECGmA2DrAc8ZuJ80), and could also help with refactoring the codebase in general.
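To illustrate the idea, here is a minimal sketch of how token counts and a derived cost could be attached to an inference record. The model names, price table, and `inference_cost` helper are all hypothetical placeholders; in practice the per-token prices would come from a maintained table such as the one shipped with tokencost.

```python
from decimal import Decimal

# Hypothetical per-1M-token prices in USD (placeholder values, not real
# model pricing; a real implementation could pull these from tokencost).
PRICES = {
    "model-a": {"prompt": Decimal("0.50"), "completion": Decimal("1.50")},
    "model-b": {"prompt": Decimal("3.00"), "completion": Decimal("15.00")},
}

def inference_cost(model: str, prompt_tokens: int, completion_tokens: int) -> Decimal:
    """Return the USD cost of one inference call from its token counts."""
    p = PRICES[model]
    per_token = Decimal(1) / Decimal(1_000_000)  # prices are per 1M tokens
    return (p["prompt"] * prompt_tokens + p["completion"] * completion_tokens) * per_token

# Attach token counts and cost to a generated inference record so that
# different models can be compared on price as well as quality.
record = {
    "model": "model-a",
    "prompt_tokens": 1200,
    "completion_tokens": 300,
}
record["cost_usd"] = inference_cost(
    record["model"], record["prompt_tokens"], record["completion_tokens"]
)
print(record["cost_usd"])  # → 0.00105000
```

Using `Decimal` rather than floats avoids rounding surprises when summing many small per-call costs across a large inference run.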