Closed: bhonris closed this issue 2 months ago
I added the following text to similarity.prompty: "You will respond with a single digit number between 1 and 5. You will include no other text or information", and this seems to fix the issue.
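For anyone else hitting this, the change amounts to appending that sentence to the end of the system prompt in similarity.prompty. A rough sketch of where it lands (the surrounding prompt text here is paraphrased for illustration, not the file's exact contents):

```
system:
You are an AI assistant. You will be given the definition of an evaluation
metric for assessing the quality of an answer in a question-answering task.
...
You will respond with a single digit number between 1 and 5. You will
include no other text or information.
```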
Hi @singankit and @luigiw, could you please take a look at this issue?
@bhonris, thank you for reporting the issue and sharing a workaround. This is a known issue: some preview OpenAI models can cause NaN results. Please also try with stable-version models.
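For example, switching the deployment to a GA model rather than a preview one might look like this (a minimal sketch; the endpoint, key, and stable deployment name are placeholders, not a confirmed fix):

```python
from promptflow.core import AzureOpenAIModelConfiguration

# Point the evaluator at a stable (GA) deployment instead of a preview one.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    azure_deployment="gpt-4",  # stable deployment name is an assumption
    api_version="2024-02-01",
)
```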
Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!
Fixed in version 0.3.2.
**Describe the bug**
When running an evaluation over a dataset with `evaluate()` using the similarity evaluator, I have come across some scenarios where the result is not a number (NaN).

**How to reproduce the bug**
Model config: `{azure_deployment="gpt4-turbo-preview", api_version="2024-02-01"}`
jsonl file: `{"Question":"How can you get the version of the Kubernetes cluster?","Answer":"{\"code\": \"kubectl version\" }","output":"{code: kubectl version --output=json}"}`
Evaluate config: (a minimal end-to-end sketch is included below)

**Expected behavior**
The returned value is a number.
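Putting the pieces above together, a minimal repro might look like the following sketch, assuming the promptflow-evals 0.3.x API; the endpoint, key, and column mapping are assumptions inferred from the jsonl row above, not taken from the original report:

```python
from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import SimilarityEvaluator

# Model config from the report; endpoint and key are placeholders.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    azure_deployment="gpt4-turbo-preview",
    api_version="2024-02-01",
)

similarity = SimilarityEvaluator(model_config)

# Map the jsonl columns onto the evaluator's inputs; this mapping is an
# assumption based on the column names in the sample row.
result = evaluate(
    data="data.jsonl",
    evaluators={"similarity": similarity},
    evaluator_config={
        "similarity": {
            "question": "${data.Question}",
            "ground_truth": "${data.Answer}",
            "answer": "${data.output}",
        }
    },
)
print(result["metrics"])  # expect a numeric similarity score, not NaN
```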
**Running information** (please complete the following information):
pf -v:
python --version: 3.10.11

**Additional context**
Inspecting `_similarity.py` suggests the actual returned value is the string 'The'. Example input: `{Question: What is the capital of France?, Answer: Washington DC, }`
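That 'The' string would explain the NaN: if the model replies with prose instead of a bare digit, converting the reply to a number fails. As a rough illustration (this is a hypothetical guard for clarity, not the library's actual parsing code):

```python
import math
import re

def parse_similarity_score(llm_output: str) -> float:
    """Extract a 1-5 rating from raw model output; NaN if none is found.

    Hypothetical helper for illustration; promptflow's _similarity.py
    may parse the output differently.
    """
    match = re.search(r"\b([1-5])\b", llm_output or "")
    return float(match.group(1)) if match else math.nan

parse_similarity_score("4")                       # 4.0
parse_similarity_score("The answer is similar.")  # nan
```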