Unable to use evaluate() inside Dev Container on Mac #35061
Operating System: VS Code Dev Container running on macOS (Apple M1 host)
Python Version: 3.11
Describe the bug
Here's how I call evaluate():
results = evaluate(
    target=wrap_target,
    data=testdata,
    task_type="qa",
    metrics_list=[metric.get_metric() for metric in requested_metrics],
    model_config=openai_config,
    data_mapping={
        # The keys in this dictionary must match the variable names of the built-in prompt templates.
        # These values must match field names in qa.jsonl:
        "question": "question",  # column of data providing input to model
        "ground_truth": "truth",  # column of data providing ground truth answer, optional for default metrics
        # These values must match field names in return value of target function:
        "context": "context",  # column of data providing context for each input
        "answer": "answer",  # column of data providing output from model
    },
    tracking=False,
    output_path=results_dir,
)
where results_dir is a value like "/workspaces/ai-rag-chat-evaluator/example_results/experiment1712165387", a path that's valid inside the dev container.
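For context, a rough sketch of the shape of the target function implied by data_mapping above (the chat client call here is a placeholder, not the real implementation, and the exact signature evaluate() uses to call the target is an assumption):

def wrap_target(question):
    # Placeholder call to the chat app; chat_client and ask() are illustrative names.
    response = chat_client.ask(question)
    # The returned field names must match the data_mapping values above.
    return {
        "context": response["context"],
        "answer": response["answer"],
    }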
When I call evaluate(), I get an error when mlflow tries to save the artifact:
File "/workspaces/ai-rag-chat-evaluator/scripts/evaluate.py", line 120, in run_evaluation
results = evaluate(
^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/azure/core/tracing/decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/azure/ai/ml/_telemetry/activity.py", line 291, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/azure/ai/generative/evaluate/_evaluate.py", line 258, in evaluate
evaluation_result = _evaluate(
^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/azure/ai/generative/evaluate/_evaluate.py", line 312, in _evaluate
with mlflow.start_run(nested=mlflow.active_run(), run_name=evaluation_name) as run, RedirectUserOutputStreams(
File "/home/vscode/.local/lib/python3.11/site-packages/azure/ai/generative/evaluate/_mlflow_log_collector.py", line 58, in __exit__
mlflow.log_artifact(self.user_log_path, "user_logs")
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/tracking/fluent.py", line 1057, in log_artifact
MlflowClient().log_artifact(run_id, local_path, artifact_path)
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/tracking/client.py", line 1189, in log_artifact
self._tracking_client.log_artifact(run_id, local_path, artifact_path)
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/tracking/_tracking_service/client.py", line 560, in log_artifact
artifact_repo.log_artifact(local_path, artifact_path)
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/store/artifact/local_artifact_repo.py", line 37, in log_artifact
mkdir(artifact_dir)
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/utils/file_utils.py", line 212, in mkdir
raise e
File "/home/vscode/.local/lib/python3.11/site-packages/mlflow/utils/file_utils.py", line 209, in mkdir
os.makedirs(target)
File "<frozen os>", line 215, in makedirs
File "<frozen os>", line 215, in makedirs
File "<frozen os>", line 215, in makedirs
[Previous line repeated 4 more times]
File "<frozen os>", line 225, in makedirs
PermissionError: [Errno 13] Permission denied: '/Users'
It's trying to save to a path like /Users/pamelafox/ai-rag-chat-evaluator, but that path does not exist in the dev container.
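My guess is that mlflow is resolving a local artifact root recorded against the host filesystem (hence the /Users/... path) instead of the container's /workspaces mount. A sketch of the kind of override I mean, pointing the tracking store at a container-valid path before calling evaluate() (the path is illustrative, and whether evaluate() respects an externally set tracking URI is an assumption on my part):

import mlflow

# Force the local tracking store (and the artifact roots for new runs) onto a path
# that exists inside the dev container; /workspaces/... is the container-side mount.
mlflow.set_tracking_uri("file:///workspaces/ai-rag-chat-evaluator/mlruns")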
To Reproduce
Steps to reproduce the behavior:
python -m scripts evaluate --config=example_config.json --numquestions=1
Expected behavior
I expect evaluate() to complete without an error and write its results to the path given in output_path inside the dev container.