Closed: AidanNell closed this issue 1 week ago
Hello @AidanNell, thanks for reporting this issue.

It seems to be just a random error, in which the LLM client appended `{'answer':` instead of outputting the JSON with `correctness` and `correctness_reason` as the first keys. Usually, trying again works well.

Also, could you share which model you are trying to use? If you are not using `gpt-4o`, we recommend using it, as it provides better results.
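For clarity, here is a minimal sketch of the JSON shape the correctness evaluator expects and a simple retry around the parse step. The `ask_llm` callable is hypothetical (a stand-in for whatever client queries the model), not part of the Giskard API; this just illustrates why retrying usually resolves the occasional malformed response.

```python
import json

# Expected evaluator response (as described above):
#   {"correctness": true, "correctness_reason": "..."}
# The reported failure happens when the model emits something like
# {'answer': ... instead, so the first keys are not what the parser expects.

def parse_correctness(raw: str) -> dict:
    """Parse the LLM response, raising if it is not the expected JSON."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if "correctness" not in data or "correctness_reason" not in data:
        raise ValueError(f"unexpected keys in LLM response: {list(data)}")
    return data

def evaluate_with_retry(ask_llm, prompt: str, retries: int = 3) -> dict:
    """Call a (hypothetical) LLM client and retry on malformed JSON output."""
    last_err = None
    for _ in range(retries):
        try:
            return parse_correctness(ask_llm(prompt))
        except (json.JSONDecodeError, ValueError) as err:
            last_err = err  # transient formatting error: simply try again
    raise RuntimeError(f"LLM kept returning malformed output: {last_err}")
```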
Issue Type: Bug
Source: source
Giskard Library Version: 2.5.1
OS Platform and Distribution: No response
Python version: 3.9.11
Installed python packages: No response
Current Behaviour?
Instead, you get the following:
Standalone code OR list down the steps to reproduce the issue