Open Blaizzy opened 4 months ago
Hi @Blaizzy. Perhaps this issue could be an opportunity to implement LLM tracking using AgentOps [https://github.com/AgentOps-AI/agentops], for example. Or do you see that as something for the future, or is it not the right approach? What are your thoughts?
Description:
We'd like to add a simple token usage tracking feature to our FastMLX application. This will help users understand how many tokens their requests are consuming.
Objective:
Implement a function that counts the number of tokens in the input and output of our AI models.
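The issue does not specify how the counts should be surfaced to users; as one illustration (an assumption, not part of this issue), an OpenAI-style `usage` object attached to each response is a common convention:

```python
# Hypothetical response shape -- the schema below is an illustrative
# convention, not something this issue prescribes.
response = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {
        "prompt_tokens": 12,       # tokens consumed by the input
        "completion_tokens": 3,    # tokens produced by the model
        "total_tokens": 15,        # sum of the two counts above
    },
}
```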
Tasks:
1. Implement a count_tokens(text: str) -> int function in the utils.py file.
2. Use this function in main.py to report token usage for requests and responses.

Example Implementation:
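A minimal sketch of the utils.py function, under two assumptions not stated in the issue: the model's tokenizer (when passed in) exposes an `encode` method returning a list of token ids, and a whitespace split is an acceptable fallback when no tokenizer is available:

```python
def count_tokens(text: str, tokenizer=None) -> int:
    """Return the number of tokens in `text`.

    If a tokenizer is provided, use its `encode` method (assumed to
    return a list of token ids). Otherwise fall back to a whitespace
    split, which is only a rough approximation of the true count.
    """
    if tokenizer is not None:
        return len(tokenizer.encode(text))
    return len(text.split())
```

In main.py, the request text and the generated output could each be passed through count_tokens and the two counts returned alongside the model's response.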
Guidelines:
Resources:
Definition of Done:
We're excited to see your contribution! This feature will help our users better understand and manage their token usage. Good luck!