deepset-ai / haystack

:mag: LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agents.
https://haystack.deepset.ai
Apache License 2.0

LLM-based Evaluators should return `meta` information provided by OpenAIGenerator #7905

Closed: bilgeyucel closed this issue 5 days ago

bilgeyucel commented 2 weeks ago

Is your feature request related to a problem? Please describe. To calculate the cost of an evaluation, we need the token count. Most generator components in Haystack already provide that information in the `meta` field.

Describe the solution you'd like LLM-based evaluators should return token count information with their results. This would make it easy to connect evaluation pipelines to monitoring tools such as Langfuse.

Describe alternatives you've considered Leave it as it is.

Additional context N/A

davidsbatista commented 1 week ago

It's ready for review; the only thing blocking it is a bug in pylint:

- https://github.com/deepset-ai/haystack/actions/runs/9712238785/job/26806852617?pr=7947

Waiting for the new release of pylint (hopefully soon) to fix it.

FIXED