Add state to the Langchain::LLM::Assistant class in order to keep track of the token counts used throughout the whole chat interaction with the assistant instance. The following information can now be accessed at the assistant level:
```ruby
assistant.total_prompt_tokens     # sum of all prompt tokens
assistant.total_completion_tokens # sum of all completion tokens
assistant.total_tokens            # sum of the two above
```
The information is extracted from the Langchain::LLM::BaseResponse object returned by each LLM call.
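A minimal sketch of the accumulation logic described above. The class and method names here (`TokenTrackingAssistant`, `record_usage!`) are illustrative stand-ins, not the actual Langchain::LLM::Assistant implementation; the `Response` struct mimics the token-count readers on Langchain::LLM::BaseResponse.

```ruby
# Illustrative sketch: an assistant-like object that accumulates token
# usage across calls. Names are hypothetical, not the library's API.
class TokenTrackingAssistant
  attr_reader :total_prompt_tokens, :total_completion_tokens

  def initialize
    @total_prompt_tokens = 0
    @total_completion_tokens = 0
  end

  # Called after each LLM call; reads the usage counts off the response
  # object and adds them to the running totals.
  def record_usage!(response)
    @total_prompt_tokens += response.prompt_tokens.to_i
    @total_completion_tokens += response.completion_tokens.to_i
  end

  def total_tokens
    total_prompt_tokens + total_completion_tokens
  end
end

# Minimal stand-in for the response object exposing token counts.
Response = Struct.new(:prompt_tokens, :completion_tokens)

assistant = TokenTrackingAssistant.new
assistant.record_usage!(Response.new(12, 30))
assistant.record_usage!(Response.new(8, 20))
# assistant.total_tokens is now 70 (20 prompt + 50 completion)
```

Accumulating at the assistant level (rather than per response) means the totals survive a multi-turn conversation, which is what the tracked state is for.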