Closed vblagoje closed 1 month ago
This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| components/classifiers/zero_shot_document_classifier.py | 3 | 91.07% |
| utils/filters.py | 3 | 96.91% |
| components/evaluators/llm_evaluator.py | 5 | 95.08% |
| Total: | 11 | |
| Totals | |
|---|---|
| Change from base Build 10899485113: | 0.09% |
| Covered Lines: | 7338 |
| Relevant Lines: | 8121 |
Please don't review this PR unless you are @anakin87
Why:

Adds token `usage` metadata to responses from `HuggingFaceAPIChatGenerator`. The `usage` dictionary in the response meta field has the following two keys, `prompt_tokens` and `completion_tokens`, matching the OpenAI format for token counting. This feature, i.e. OpenAI token usage format compatibility, aside from the interchangeability benefits across chat generators, is needed for full support of Langfuse GENERATION token usage rendering in traces. See https://github.com/deepset-ai/haystack-private/issues/82 for more details.
What:

Adds a `usage` meta field with the keys `prompt_tokens` and `completion_tokens` to `HuggingFaceAPIChatGenerator`. Each reply message now carries `usage` information in its metadata.

How can it be used:
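A minimal sketch of how the new field might be consumed. The exact reply/meta structure shown here is an assumption based on the OpenAI-style usage format this PR describes, and the token counts are made-up illustrative numbers:

```python
# Hedged sketch: `reply_meta` is a hypothetical stand-in for the meta dict of
# a reply message; the numbers are made up for illustration.
def token_usage(meta: dict) -> tuple[int, int]:
    """Return (prompt_tokens, completion_tokens) from a reply's meta dict."""
    usage = meta.get("usage", {})
    return usage.get("prompt_tokens", 0), usage.get("completion_tokens", 0)

# Example meta field as a chat generator reply might carry it:
reply_meta = {"usage": {"prompt_tokens": 38, "completion_tokens": 128}}
prompt, completion = token_usage(reply_meta)
print(prompt + completion)  # total tokens consumed by this generation
```

Because the keys mirror OpenAI's format, the same consumer code works unchanged across OpenAI-compatible chat generators.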
How did you test it:

Updated tests check for the `usage` meta field and its contained `prompt_tokens` and `completion_tokens` keys in the reply messages.

Notes for the reviewer:
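For the reviewer, a minimal sketch of the shape being asserted on reply metadata. The helper name and the meta dict below are hypothetical stand-ins, not the generator's real output:

```python
# Hedged sketch of the kind of check the updated tests perform; the meta
# dict is hypothetical stand-in data, not a real generator reply.
def assert_usage_meta(meta: dict) -> None:
    usage = meta["usage"]  # raises KeyError if the field is missing
    assert isinstance(usage["prompt_tokens"], int)
    assert isinstance(usage["completion_tokens"], int)

assert_usage_meta({"usage": {"prompt_tokens": 10, "completion_tokens": 5}})
print("usage meta ok")
```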