Closed: demdecuong closed this issue 20 hours ago
Hi @demdecuong, are you getting an error similar to the one below?
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/py312_llm_eval/lib/python3.12/site-packages/litellm/main.py", line 986, in completion
optional_params = get_optional_params(
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/py312_llm_eval/lib/python3.12/site-packages/litellm/utils.py", line 3300, in get_optional_params
if response_format is not None and response_format["type"] == "json_object":
~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/py312_llm_eval/lib/python3.12/site-packages/pydantic/main.py", line 700, in __class_getitem__
raise TypeError(f'{cls} cannot be parametrized because it does not inherit from typing.Generic')
TypeError: <class 'opik.evaluation.metrics.llm_judges.hallucination.metric.HallucinationResponseFormat'> cannot be parametrized because it does not inherit from typing.Generic
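In other words, litellm indexes `response_format` as if it were a dict (`response_format["type"]`), but opik passes a pydantic model class, and subscripting that class raises the `TypeError` above. A minimal, pydantic-free sketch of the same failure pattern (the class name here is just a stand-in, not opik's actual class):

```python
# Stand-in for opik's HallucinationResponseFormat: a plain class, not a dict.
class ResponseFormat:
    type = "json_object"

# litellm's check did the equivalent of this dict-style lookup on the class:
try:
    ResponseFormat["type"]
    msg = ""
except TypeError as e:
    msg = str(e)

print("TypeError:", msg)
```

With a pydantic `BaseModel` subclass, the same lookup goes through pydantic's `__class_getitem__` instead, which is why the traceback mentions `typing.Generic`; either way, the class is not subscriptable like a dict.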
I think it's related to this issue: https://github.com/comet-ml/opik/issues/663 and to the fact that we assume the model supports structured outputs.
Hi @demdecuong! It was a bug in the litellm library, which they have since fixed, so you can update litellm and try again.
I want to use qwen2.5:3b as an LLM judge. I read the docs, but I'm still stuck on this.
This is my example code:
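For anyone landing here, a minimal sketch of pointing an opik metric at a local Ollama model. This is an assumption based on litellm's `provider/model` naming convention, not the poster's actual code; the opik call itself is left in comments because it needs a running Ollama server with the model pulled. The only executable part demonstrates the model-string convention:

```python
# Hypothetical usage (requires `pip install opik litellm`, an Ollama server,
# and `ollama pull qwen2.5:3b`); shown as comments, not verified here:
#
#     from opik.evaluation.metrics import Hallucination
#     metric = Hallucination(model="ollama/qwen2.5:3b")
#     result = metric.score(
#         input="What is the capital of France?",
#         output="The capital of France is Paris.",
#     )
#     print(result.value)

def litellm_model_string(provider: str, model: str) -> str:
    """Build the provider-prefixed model name that litellm routes on."""
    return f"{provider}/{model}"

judge_model = litellm_model_string("ollama", "qwen2.5:3b")
print(judge_model)
```

litellm routes any `ollama/...` model string to a local Ollama server, so the same string should work anywhere opik accepts a litellm model name.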