microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/
MIT License

[BUG] ValueError: Missing required inputs for target : ['question'] while using evaluator_config #3516

Closed by yanggaome 3 months ago

yanggaome commented 3 months ago

Describe the bug

I am using evaluator_config to set up a column mapping for the data. Each row of the JSONL data file has the form {"test_question": ...}, following the evaluatesafetyrisks example. My code:

from promptflow.evals.evaluate import evaluate

def user_call(*, question: str, **kwargs):
    ...

result = evaluate(
    target=user_call,
    data=data_path,  # each row: {"test_question": ...}
    # evaluators=... (elided in the original snippet)
    evaluator_config={
        "violence": {"question": "${data.test_question}"},
        "sexual": {"question": "${data.test_question}"},
        "self_harm": {"question": "${data.test_question}"},
        "hate_unfairness": {"question": "${data.test_question}"},
        "content_safety": {"question": "${data.test_question}"},
    },
)
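
For reference, each line of the JSONL data file would look something like this (illustrative value, not from the thread):

    {"test_question": "What is the capital of France?"}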

However, evaluate still complains that it cannot find the question input:

Traceback (most recent call last):
  ...
  File "/anaconda/envs/azureml_py38/lib/python3.9/site-packages/promptflow/evals/evaluate/_evaluate.py", line 337, in evaluate
    _validate_columns(input_data_df, evaluators, target, evaluator_config)
  File "/anaconda/envs/azureml_py38/lib/python3.9/site-packages/promptflow/evals/evaluate/_evaluate.py", line 143, in _validate_columns
    _validate_input_data_for_evaluator(target, None, df, is_target_fn=True)
  File "/anaconda/envs/azureml_py38/lib/python3.9/site-packages/promptflow/evals/evaluate/_evaluate.py", line 81, in _validate_input_data_for_evaluator
    raise ValueError(f"Missing required inputs for target : {missing_inputs}.")
ValueError: Missing required inputs for target : ['question'].


Running Information:

{
  "promptflow": "1.13.0",
  "promptflow-azure": "1.13.0",
  "promptflow-core": "1.13.0",
  "promptflow-devkit": "1.13.0",
  "promptflow-evals": "0.3.0",
  "promptflow-tracing": "1.13.0"
}

Executable '/anaconda/envs/azureml_py38/bin/python'
Python (Linux) 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) 
[GCC 12.3.0]


singankit commented 3 months ago

Thanks @yanggaome for filing the bug. evaluator_config maps data (the input data plus data generated by the target) to the evaluators' inputs. However, this error occurs earlier, when the evaluate API calls the target (user_call) with the input data (data_path); the column mapping is not applied to that call. When both input data and a target are provided, the input data must contain every field the target requires.
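
In other words, the mapping only helps the evaluators, not the target. A minimal sketch of the fix, assuming the rest of the call stays as above: rename the target's parameter to match the data column (or, equivalently, rename the column in the JSONL file to question).

    # Sketch: the target's keyword parameter matches the data column,
    # so evaluate() can call it directly with rows from data_path.
    def user_call(*, test_question: str, **kwargs):
        ...  # query the model with test_question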

yanggaome commented 3 months ago

Thanks @singankit, that answered my question!

Following up on this: when the evaluate API calls the target (user_call) with the data, the target only receives inputs from the data file. Is there a way to pass in additional variables, e.g. some configs? (A possible approach is sketched below.)

Here is the issue I created: https://github.com/microsoft/promptflow/issues/3526
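
One possible workaround (a sketch, assuming any callable whose required keyword parameters match the data columns is a valid target; not confirmed in this thread) is to bind the extra configuration into the target with a closure:

    # Hypothetical helper: config is captured in a closure, so the
    # returned target only exposes the data-driven input "question".
    def make_target(config: dict):
        def user_call(*, question: str, **kwargs):
            ...  # use both question and config when calling the model
        return user_call

    target = make_target({"temperature": 0.0})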

singankit commented 3 months ago

Closing this bug in favor of #3526