This repo contains scripts and tools for evaluating a chat app that uses the RAG architecture. There are many parameters that affect the quality and style of answers generated by the chat app, such as the system prompt, search parameters, and GPT model parameters.
Whenever you make changes to a RAG chat app with the goal of improving the answers, you should evaluate the results. This repository offers tools to make it easier to run evaluations, plus examples of evaluations that we've run on our sample chat app.
đź“ş Watch a video overview of this repo
Table of contents:
There are several places where this project can incur costs:
| Cost | Description | Estimated tokens used |
|---|---|---|
| Generating ground truth data | This is a one-time cost for generating the initial set of questions and answers, and involves pulling data down from your search index and sending it to the GPT model. | 1000 tokens per question generated, which would be 200,000 tokens for the recommended 200 questions. |
| Running evaluations | Each time you run an evaluation, you may choose to use the GPT-based evaluators (groundedness, coherence, etc). For each GPT evaluator used, you will incur costs for the tokens used by the GPT model. | 1000 tokens per question per evaluator used, which would be 600,000 tokens for the default 200 questions and 3 evaluators. |
For a full estimate of the costs for your region and model, see the Azure OpenAI pricing page or use the Azure OpenAI pricing calculator.
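As a purely illustrative calculation: at a hypothetical rate of $0.01 per 1,000 tokens, the 600,000 evaluation tokens above would come to about $6. Actual rates vary by model, region, and whether tokens are input or output, so treat this only as a rough order of magnitude and check the pricing page for real numbers.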
If you open this project in a Dev Container or GitHub Codespaces, it will automatically set up the environment for you. If not, then follow these steps:
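First, create and activate a Python virtual environment. A minimal sketch, assuming Python 3 and a bash-like shell (adjust the activation command for your platform):

```shell
# Create a virtual environment in the .venv folder and activate it
python3 -m venv .venv
source .venv/bin/activate
```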
Inside that virtual environment, install the project:
python -m pip install -e .
It's best to use a GPT-4 model for performing the evaluation, even if your chat app uses GPT-3.5 or another model. You can either use an Azure OpenAI instance or an openai.com instance.
To use a new Azure OpenAI instance, you'll need to create the instance and deploy a GPT-4 model to it. We've made that easy with the `azd` CLI tool.
Install the Azure Developer CLI.
Run `azd auth login` to log in to your Azure account.
Run `azd up` to deploy a new GPT-4 instance.
Create a `.env` file based on `.env.sample`:
cp .env.sample .env
Run these commands to get the required values for `AZURE_OPENAI_EVAL_DEPLOYMENT` and `AZURE_OPENAI_SERVICE` from your deployed resource group, and paste those values into the `.env` file:
azd env get-value AZURE_OPENAI_EVAL_DEPLOYMENT
azd env get-value AZURE_OPENAI_SERVICE
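Once those values are pasted in, the relevant lines of your `.env` might look something like this (the names shown here are placeholders):

```
AZURE_OPENAI_EVAL_DEPLOYMENT="gpt-4-eval"
AZURE_OPENAI_SERVICE="my-openai-service"
```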
If you already have an Azure OpenAI instance, you can use that instead of creating a new one.
Create a `.env` file by copying `.env.sample`.
Fill in the values for your instance:
AZURE_OPENAI_EVAL_DEPLOYMENT="<deployment-name>"
AZURE_OPENAI_ENDPOINT="https://<service-name>.openai.azure.com"
The scripts default to keyless access (via `DefaultAzureCredential`), but you can optionally use a key by setting `AZURE_OPENAI_KEY` in `.env`.
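For example, to use key-based access instead, you could add a line like this to `.env` (placeholder value shown):

```
AZURE_OPENAI_KEY="<your-api-key>"
```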
If you have an openai.com instance, you can use that instead of an Azure OpenAI instance.
Create a `.env` file by copying `.env.sample`.
Change `OPENAI_HOST` to "openai" and fill in the key for your OpenAI account. If you do not have an organization, you can leave that blank:
OPENAI_HOST="openai"
OPENAICOM_KEY=""
OPENAICOM_ORGANIZATION=""
In order to evaluate new answers, they must be compared to "ground truth" answers: the ideal answer for a particular question. See `example_input/qa.jsonl` for an example of the format.
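For reference, each line of that file is a JSON object pairing a question with its ideal answer, which can include citations in square brackets. A sketch of what one entry might look like, with illustrative content; the example file typically uses `question` and `truth` fields, so confirm against `example_input/qa.jsonl` if you build your own:

```json
{"question": "What does a product manager do?", "truth": "A product manager is responsible for the product roadmap and for communicating with stakeholders. [role_library.pdf]"}
```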
We recommend at least 200 QA pairs if possible.
There are a few ways to get this data:
This repo includes a script for generating questions and answers from documents stored in Azure AI Search.
[!IMPORTANT] The generator script can only generate English Q/A pairs right now, due to limitations in the azure-ai-generative SDK.
Create a `.env` file by copying `.env.sample`.
Fill in the values for your Azure AI Search instance:
AZURE_SEARCH_ENDPOINT="https://<service-name>.search.windows.net"
AZURE_SEARCH_INDEX="<index-name>"
AZURE_SEARCH_KEY=""
The key may not be necessary if it's configured for keyless access from your account. If providing a key, it's best to provide a query key since the script only requires that level of access.
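If you do need a key and you have the Azure CLI installed, one way to list the query keys for your search service (the service and resource group names below are placeholders):

```shell
az search query-key list --service-name <service-name> --resource-group <resource-group>
```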
Run the generator script:
python -m evaltools generate --output=example_input/qa.jsonl --persource=5 --numquestions=200
That script will generate 200 questions and answers, and store them in `example_input/qa.jsonl`. We've already provided an example based on the sample documents for this app.
To further customize the generator beyond the `numquestions` and `persource` parameters, modify `scripts/generate.py`.
Optional: By default, this script assumes your index's citation field is named `sourcepage`. If your search index uses a different citation field name, use the `citationfieldname` option to specify the correct name:
python -m evaltools generate --output=example_input/qa.jsonl --persource=5 --numquestions=200 --citationfieldname=filepath
We provide a script that loads in the current `azd` environment's variables, installs the requirements for the evaluation, and runs the evaluation against the local app. Run it like this:
python -m evaltools evaluate --config=example_config.json
The config JSON file should contain at least these fields:
{
"testdata_path": "example_input/qa.jsonl",
"target_url": "http://localhost:50505/chat",
"requested_metrics": ["groundedness", "relevance", "coherence", "latency", "answer_length"],
"results_dir": "example_results/experiment<TIMESTAMP>"
}
If you're running this evaluator in a container and your app is running in a container on the same system, use a URL like this for the `target_url`:
"target_url": "http://host.docker.internal:50505/chat"
To run against a deployed endpoint, change the `target_url` to the chat endpoint of the deployed app:
"target_url": "https://app-backend-j25rgqsibtmlo.azurewebsites.net/chat"
It's common to run the evaluation on a subset of the questions, to get a quick sense of how the changes are affecting the answers. To do this, use the `--numquestions` parameter:
python -m evaltools evaluate --config=example_config.json --numquestions=2
The `evaluate` command will use the metrics specified in the `requested_metrics` field of the config JSON.
Some of those metrics are built into the evaluation SDK, and the rest are custom metrics that we've added.
These metrics are calculated by sending a call to the GPT model, asking it to provide a 1-5 rating, and storing that rating.
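As a rough illustration only (this is not the actual prompt; the real prompts for the custom metrics live under `scripts/evaluate_metrics/prompts/`), a groundedness-style evaluator sends the model an instruction along these lines:

```
You will be given a question, an answer, and source context.
Rate how well the answer is supported by the context on a scale of 1 to 5,
where 1 means not supported at all and 5 means fully supported.
Respond with only the number.
```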
[!IMPORTANT] The built-in metrics are only intended for evaluating English-language answers, since they use English-language prompts internally. For non-English languages, use the custom prompt metrics instead.
- `gpt_coherence` measures how well the language model can produce output that flows smoothly, reads naturally, and resembles human-like language.
- `gpt_relevance` assesses the ability of answers to capture the key points of the context.
- `gpt_groundedness` assesses the correspondence between claims in an AI-generated answer and the source context, making sure that these claims are substantiated by the context.
- `gpt_similarity` measures the similarity between a source data (ground truth) sentence and the generated response by an AI model.
- `gpt_fluency` measures the grammatical proficiency of a generative AI's predicted answer.
- `f1_score` measures the ratio of the number of shared words between the model generation and the ground truth answers.

The following metrics are implemented very similarly to the built-in metrics, but use a locally stored prompt. They're a great fit if you find that the built-in metrics are not working well for you or if you need to translate the prompt to another language.
- `mycoherence`: Measures how well the language model can produce output that flows smoothly, reads naturally, and resembles human-like language. Based on `scripts/evaluate_metrics/prompts/coherence.prompty`.
- `myrelevance`: Assesses the ability of answers to capture the key points of the context. Based on `scripts/evaluate_metrics/prompts/relevance.prompty`.
- `mygroundedness`: Assesses the correspondence between claims in an AI-generated answer and the source context, making sure that these claims are substantiated by the context. Based on `scripts/evaluate_metrics/prompts/groundedness.prompty`.

These metrics are calculated with some local code based on the results of the chat app, and do not require a call to the GPT model.
- `latency`: The time it takes for the chat app to generate an answer, in seconds.
- `length`: The length of the generated answer, in characters.
- `has_citation`: Whether the answer contains a correctly formatted citation to a source document, assuming citations are in square brackets.
- `citation_match`: Whether the answer contains at least all of the citations that were in the ground truth answer.

This repo assumes that your chat app is following the AI Chat Protocol, which means that all POST requests look like this:
{"messages": [{"content": "<Actual user question goes here>", "role": "user"}],
"context": {...},
}
Any additional app parameters would be specified in the `context` of that JSON, such as temperature, search settings, prompt overrides, etc. To specify those parameters, add a `target_parameters` key to your config JSON. For example:
"target_parameters": {
"overrides": {
"semantic_ranker": false,
"prompt_template": "<READFILE>example_input/prompt_refined.txt"
}
}
The `overrides` key is the same as the `overrides` key in the `context` of the POST request.
As a convenience, you can use the `<READFILE>` prefix to read in a file and use its contents as the value for the parameter.
That way, you can store potentially long prompts separately from the config JSON file.
The evaluator needs to know where to find the answer and context in the response from the chat app. If your app returns responses following the recommendations of the AI Chat Protocol, then the answer will be found at `message.content` and the context will be a list of strings at `context.data_points.text`.
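For example, a response in that default shape might look roughly like this (the values are illustrative):

```json
{
  "message": {"content": "Our company values are integrity and innovation. [handbook.pdf]", "role": "assistant"},
  "context": {
    "data_points": {
      "text": ["handbook.pdf: Our company values are integrity and innovation."]
    }
  }
}
```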
If your app returns responses in a different format, you can specify the JMESPath expressions to extract the answer and context from the response. For example:
"target_response_answer_jmespath": "message.content",
"target_response_context_jmespath": "context.data_points.text"
The results of each evaluation are stored in a results folder (defaulting to `example_results`).
Inside each run's folder, you'll find:
- `eval_results.jsonl`: Each question and answer, along with the GPT metrics for each QA pair.
- `parameters.json`: The parameters used for the run, like the overrides.
- `summary.json`: The overall results, like the average GPT metrics.
- `config.json`: The original config used for the run. This is useful for reproducing the run.

To make it easier to view and compare results across runs, we've built a few tools, located inside the `review-tools` folder.
To view a summary across all the runs, use the `summary` command with the path to the results folder:
python -m evaltools summary example_results
This will display an interactive table with the results for each run.
To see the parameters used for a particular run, select the folder name. A modal will appear with the parameters, including any prompt override.
To compare the answers generated for each question across two runs, use the `diff` command with two paths:
python -m evaltools diff example_results/baseline_1 example_results/baseline_2
This will display each question, one at a time, with the two generated answers in scrollable panes, and the GPT metrics below each answer.
Use the buttons at the bottom to navigate to the next question or quit the tool.
You can also filter to only show questions where the value changed for a particular metric, like this:
python -m evaltools diff example_results/baseline_1 example_results/baseline_2 --changed=has_citation
The evaluation flow described above focused on evaluating a model’s answers for a set of questions that could be answered by the data. But what about all those questions that can’t be answered by the data? Does your model know how to say “I don’t know”? The GPT models are trained to try to be helpful, so their tendency is to always give some sort of answer, especially for answers found in their training data. If you want to ensure your app can say “I don’t know” when it should, you need to evaluate it on a different set of questions with a different metric.
For this evaluation, our ground truth data needs to be a set of questions whose answers should provoke an “I don’t know” response from the app. There are several categories of such questions:
You can write these questions manually, but it’s also possible to generate them using a generator script in this repo, assuming you already have ground truth data with answerable questions.
python -m evaltools generate-dontknows --input=example_input/qa.jsonl --output=example_input/qa_dontknows.jsonl --numquestions=45
That script sends the current questions to the configured GPT-4 model along with prompts to generate questions of each kind.
When it’s done, you should review and curate the resulting ground truth data. Pay special attention to the "unknowable" questions at the top of the file, since you may decide that some of those are actually knowable, and you may want to reword or rewrite them entirely.
This repo contains a custom GPT metric called "dontknowness" that rates answers from 1-5, where 1 is "answered the question completely with no uncertainty" and 5 is "said it didn't know and attempted no answer". The goal is for all answers to be rated 4 or 5.
Here's an example configuration JSON that requests that metric, referencing the new ground truth data and a new output folder:
{
"testdata_path": "example_input/qa_dontknows.jsonl",
"results_dir": "example_results_dontknows/baseline",
"requested_metrics": ["dontknowness", "answer_length", "latency", "has_citation"],
"target_url": "http://localhost:50505/chat",
"target_parameters": {
},
"target_response_answer_jmespath": "message.content",
"target_response_context_jmespath": "context.data_points.text"
}
We recommend a separate output folder, as you'll likely want to make multiple runs and easily compare between those runs using the review tools.
Run the evaluation like this:
python -m evaltools evaluate --config=dontknows.config.json
The results will be stored in the `results_dir` folder, and can be reviewed using the review tools.
If the app is not saying "I don't know" enough, you can use the `diff` tool to compare the answers for the "dontknows" questions across runs, and see if the answers are improving. Changes you can try: