domciakocan opened 3 months ago
I received a similar error when running `ContextualPrecisionMetric`, and it appears there was another error that led to this one:
```
File ~/anaconda3/envs/llamaindex/lib/python3.12/site-packages/deepeval/metrics/contextual_precision/contextual_precision.py:189, in ContextualPrecisionMetric._a_generate_verdicts(self, input, expected_output, retrieval_context)
    188 try:
--> 189     res: Verdicts = await self.model.a_generate(
    190         prompt, schema=Verdicts
    191     )
    192     verdicts = [item for item in res.verdicts]

TypeError: object Verdicts can't be used in 'await' expression
```
Do you see a similar error at the top of your stacktrace?
Update: Nevermind, this was my own doing. I never added `async` to the `a_generate` signature.
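For anyone hitting the same traceback: if `a_generate` on a custom model is defined as a plain `def`, it returns the result object directly, and `await`-ing that object raises exactly this `TypeError`. A minimal sketch of the difference (the `CustomModel` class and its method bodies are illustrative, not deepeval's actual implementation):

```python
import asyncio

class CustomModel:
    """Hypothetical model wrapper; names mirror deepeval's generate/a_generate
    convention but the bodies are stand-ins for a real LLM call."""

    def generate(self, prompt, schema=None):
        # Synchronous path: return the parsed result directly.
        return {"prompt": prompt, "schema": schema}

    # The bug: defining this with a plain `def` means the call returns the
    # value itself, and `await model.a_generate(...)` then fails with
    # "TypeError: object ... can't be used in 'await' expression".
    # Declaring it `async def` makes the call return an awaitable coroutine.
    async def a_generate(self, prompt, schema=None):
        return self.generate(prompt, schema=schema)

async def main():
    model = CustomModel()
    # Awaiting works because a_generate is an async def.
    return await model.a_generate("rate the context", schema="Verdicts")

print(asyncio.run(main()))
```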
**Describe the bug**
Running tests for the Knowledge Retention metric (following the documentation: https://docs.confident-ai.com/docs/metrics-knowledge-retention) raises: `TypeError: Claude.generate() missing 1 required positional argument: 'schema'`.
**Expected behavior**
The code should produce a score for the Knowledge Retention metric.
**Additional context**
Other metrics, such as hallucination and bias, work properly.
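Judging only from the error message, the `Claude` wrapper here likely declares `schema` as a required parameter while some metrics call `generate(prompt)` without it. One hedged workaround is to give `schema` a default so the wrapper accepts both call shapes (the `Claude` class below is a stand-in sketch, not the reporter's actual code or deepeval's API):

```python
class Claude:
    """Illustrative wrapper only. If generate() requires `schema`, a metric
    that calls generate(prompt) raises: missing 1 required positional
    argument: 'schema'. A default value keeps both call styles working."""

    def generate(self, prompt, schema=None):
        if schema is not None:
            # Structured-output path: a real wrapper would parse the model's
            # response into the given schema here.
            return f"structured:{prompt}"
        # Plain-text path used by metrics that pass only the prompt.
        return f"plain:{prompt}"

model = Claude()
print(model.generate("summarize"))               # no schema: still works
print(model.generate("summarize", schema=dict))  # schema supplied: also works
```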