Closed: sofyan-ajridi-ey closed this issue 5 months ago
Can you confirm you have the most recent version of this repo? I added encoding="utf-8" to all the file open() calls to fix a similar issue, and want to make sure that's the version you're using.
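For reference, that change looks roughly like this (the filename here is just illustrative, not a specific path from the repo):

# Pass an explicit encoding so Windows doesn't fall back to the locale
# code page (typically cp1252, the usual source of 'charmap' errors):
with open("results/eval_results.jsonl", encoding="utf-8") as f:
    lines = f.readlines()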
I just double-checked: everything is up to date, including encoding="utf-8" in all the open() calls.
Hm. I've tested with the sample data you gave here, and am unable to replicate the error. Does the error happen if you only do the first two questions? Or does it happen on a later question? I'm trying to figure out if the issue is with the encoding/characters of your input data or of the target endpoint response.
I got the same issue and posted the messages here: https://github.com/Azure-Samples/ai-rag-chat-evaluator/issues/32#issuecomment-1925417941
Just check whether your Dev Container runs properly; that fixed my problem, together with specifying UTF-8. This is a compatibility problem between Windows and Linux. (WSL should work.)
It happens instantly (I assume after the first question). I tested it out with another set of questions and I have the same issue.
I'd like to try it out, but I'm working on a corporate PC without a Docker license.
Could any of you try adding this line after requests.post() in evaluate.py?
r = requests.post(target_url, headers=headers, json=body)
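# Force the response to be decoded as UTF-8 before r.text / r.json()
# are read, instead of relying on the charset requests detected: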
r.encoding = "utf-8"
It explicitly sets the encoding to UTF-8. I'm hoping the issue is that requests is detecting a different encoding on Windows, and we just need to override that to UTF-8. Unfortunately I'm on a Mac, so I haven't been able to replicate it personally.
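If it helps to narrow this down, you could also print what requests detected before the override (a quick diagnostic, not part of the repo's code):

print(r.encoding, r.apparent_encoding)

r.apparent_encoding is requests' charset detection run on the raw response bytes, so a mismatch between the two values would support this theory.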
Sadly, that didn't fix it. Same error:
2024-02-05 12:58:17 (WARNING) azureml.metrics.common.llm_connector._openai_connector: Computing gpt based metrics failed with the exception : 'charmap' codec can't encode characters in position 6-47: character maps to <undefined>
2024-02-05 12:58:17 (ERROR) azureml.metrics.common._scoring: Scoring failed for QA metric gpt_coherence
2024-02-05 12:58:17 (ERROR) azureml.metrics.common._scoring: Class: NameError Message: name 'NotFoundError' is not defined
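For context: 'charmap' is the codec name Windows Python uses for legacy code pages such as cp1252, so something in the pipeline is still encoding text with the locale code page instead of UTF-8. A minimal repro of the same error on Windows (assuming a cp1252 locale, the Windows default) is:

# Raises UnicodeEncodeError: 'charmap' codec can't encode character ...
"café ☕".encode("cp1252")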
I'm also facing the same issue.
OS Type: Windows
IDE: Visual Studio Code
Python Version: 3.10
I'm executing the script locally without the container; both my RAG chat service and the evaluation script run on my local machine.
Command: python -m scripts evaluate --config=example_config.json --numquestions=2
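One workaround that may be worth trying when running outside a container on Windows (a suggestion, not something confirmed in this thread) is Python's UTF-8 mode, which makes UTF-8 the default encoding regardless of the locale code page:

set PYTHONUTF8=1
python -m scripts evaluate --config=example_config.json --numquestions=2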
You must run the RAG eval scripts in a dev container (Ubuntu or similar); otherwise it will not work. The easiest way is to clone the repository directly via VS Code, which should start a dev container automatically. Otherwise, start a container directly and execute the evaluation there.
Okay, I would like to get this working outside of a dev container as well, so I will see if I can work with a colleague with a Windows machine to find a fix.
Hey @pamelafox, any updates on this?
I now have a Windows machine! I'm working on replicating the issue now.
Okay, so I replicated the encoding error, and then I merged my most recent PR that upgraded the azure-ai-generative SDK, and now I no longer see the error. Can you all try the latest main and see if it's working for you?
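For anyone updating in place, pulling the latest main and upgrading the SDK should pick up the fix; assuming it was installed with pip, something like:

git pull origin main
pip install --upgrade azure-ai-generative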
Good news, latest PR fixes the issue on my end! Thank you for your quick response.
Phew! Closing this. Thanks for confirming!
@pamelafox @sofyan-ajridi-ey - the latest merge fixed my issue, thanks!
This issue is fixed on my end as well. Thanks @pamelafox
Minimal steps to reproduce
When I try to run the evaluate command, it first sends a test question, which goes fine. But then it fails with the 'charmap' codec error messages quoted above, and eval_results.jsonl contains the same errors.