name: Contoso Chat Retail with Azure AI Studio and Promptflow
description: A retail copilot that answers customer queries with responses grounded in the retailer's product and customer data.
languages:
This sample creates a customer support chat agent for an online retailer called Contoso Outdoors. The solution uses a retrieval-augmented generation pattern to ground responses in the company's product and customer data. Customers can ask questions about the retailer's product catalog, and also get recommendations based on their prior purchases.
In this sample we build, evaluate, and deploy a customer support chat AI for Contoso Outdoors, a fictitious retailer that sells hiking and camping equipment. The solution uses a Retrieval Augmented Generation (RAG) architecture to build a retail copilot that responds to customer queries with answers grounded in the company's product catalog and customer purchase history.
The sample uses Azure AI Search to create and manage search indexes for product catalog data, Azure Cosmos DB to store and manage customer purchase history data, and Azure OpenAI to deploy and manage the core models required for our RAG-based architecture.
By exploring and deploying this sample, you will learn to:
The project comes with:
This is also a signature sample for demonstrating new capabilities in the Azure AI platform. Expect regular updates to showcase cutting-edge features and best practices for generative AI development.
The Contoso Chat application implements a retrieval augmented generation pattern to ground the model responses in your data. The architecture diagram below illustrates the key components and services used for implementation and highlights the use of Azure Managed Identity to reduce developer complexity in managing sensitive credentials.
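The RAG pattern described above can be sketched in a few lines of Python. This is a minimal illustration only: the retrieval and generation steps are stubbed out with placeholder data, whereas the actual sample backs them with Azure AI Search, Azure Cosmos DB, and Azure OpenAI.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# Catalog entries and responses are made-up placeholders, not sample data.

def retrieve_products(query: str) -> list[str]:
    # Stub: in the real sample this queries an Azure AI Search index.
    catalog = {
        "hiking shoes": ["TrailWalker Hiking Shoes", "TrekReady Hiking Boots"],
        "tents": ["TrailMaster X4 Tent"],
    }
    return next((items for key, items in catalog.items() if key in query.lower()), [])

def build_prompt(query: str, context: list[str]) -> str:
    # Ground the model's answer in the retrieved context.
    return f"Answer using only this context: {context}\nQuestion: {query}"

def answer(query: str) -> str:
    context = retrieve_products(query)
    prompt = build_prompt(query, context)
    # Stub: in the real sample the prompt is sent to an Azure OpenAI chat model.
    return f"[grounded response based on {len(context)} documents]"

print(answer("Tell me about hiking shoes"))
```

The key design point is that the model never answers from its own knowledge alone; every response is conditioned on documents retrieved for that specific query.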
🌟 | Watch for a video update showing how easy it is to go from code to cloud using this template and the Azure Developer CLI for deploying your copilot application.
This has been the signature sample used to showcase end-to-end development of a copilot application code-first on the Azure AI platform. It has been actively used for training developer audiences and industry partners at key events including Microsoft AI Tour and Microsoft Build. Use the links below to reference specific versions of the sample corresponding to a related workshop or event session.
| Version | Description |
|---------|-------------|
| v0 : #cc2e808 | Microsoft AI Tour 2023-24 (dag-flow, jinja template) - Skillable Lab |
| v1 : msbuild-lab322 | Microsoft Build 2024 (dag-flow, jinja template) - Skillable Lab |
| v2 : main | Latest version (flex-flow, prompty asset) - Azure AI Template |
You will also need:
You have three options for getting started with this template:
We recommend GitHub Codespaces for the fastest start with the least effort. However, we have provided instructions for all three options below.
Click the button to launch this repository in GitHub Codespaces.
This opens a new browser tab where setup takes a few minutes to complete. Once ready, you should see a Visual Studio Code editor in your browser tab, with a terminal open.
Sign into your Azure account from the VS Code terminal
```shell
azd auth login --use-device-code
```
This option opens the project in your local VS Code using the Dev Containers extension instead. It is a useful alternative if your GitHub Codespaces quota is low, or if you need to work offline.
Open the project by clicking the button below:
```shell
azd auth login
```
```shell
pip install -r requirements.txt
```
Install the Azure Developer CLI for your operating system:

```shell
# Windows
winget install microsoft.azd

# Linux
curl -fsSL https://aka.ms/install-azd.sh | bash

# macOS
brew tap azure/azd && brew install azd
```
Sign into your Azure account from the VS Code terminal
```shell
azd auth login
```
Provision and deploy your application to Azure. You will need to specify a valid subscription, deployment location, and environment name.
```shell
azd up
```
- Open `Deployments` to track the status of the provisioning process.
- Open `Deployments` to track the status of the application deployment.
- Find the `chat-deployment-xx` endpoint listed.
- Open the `Test` tab for a built-in testing sandbox.
- In the `Input` box, enter a new query in this format and submit it:
```json
{"question": "Tell me about hiking shoes", "customerId": "2", "chat_history": []}
```
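The same request can also be sent programmatically. The sketch below builds the request in Python; the endpoint URL and key are placeholders you would copy from the deployment details page, and the exact URL shape and auth header depend on your deployment, so treat this as an assumption rather than the sample's tested client code.

```python
import json
import urllib.request

# Placeholders: copy the real values from your deployment's details page.
ENDPOINT_URL = "https://<your-endpoint>.inference.ml.azure.com/score"
PRIMARY_KEY = "<your-primary-key>"

# Request body matching the format expected by the deployed copilot.
payload = {
    "question": "Tell me about hiking shoes",
    "customerId": "2",
    "chat_history": [],
}

body = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {PRIMARY_KEY}",
    },
)
# Uncomment once the placeholders are filled in:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```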
You can find your deployed retail copilot's Endpoint and Primary Key information on the deployment details page in the last step. Use them to configure your preferred front-end application (e.g., web app) to support a customer support chat UI capability that interacts with the deployed copilot in real time.
This sample contains an example `chat.prompty` asset that you can explore to understand this new capability. The file has the following components:
- `name` of the application
- `description` of the application functionality
- `authors` of the application (one per line)
- `model` description, with these parameters:
  - `api`: the type of endpoint (can be chat or completion)
  - `configuration` parameters, including the `type` of connection (azure_openai or openai)
  - `parameters` (max_tokens, temperature, response_format)
- `inputs`: each with a type and an optional default value
- `outputs`: specifying a type (e.g., string)
- `sample`: an example of the inputs (e.g., for testing)
- `system` context (defining the agent persona and behavior)
- `#Safety` section enforcing responsible AI requirements
- `#Documentation` section with a template for filling product documentation
- `#Previous Orders` section with a template for filling relevant history
- `#Customer Context` section with a template for filling customer details
- `question` section to embed the user query
- `Instructions` section to reference related product recommendations

This specific prompty takes 3 inputs: a `customer` object, a `documentation` object (that could be chat history), and a `question` string that represents the user query. You can now load, execute, and trace individual prompty assets for a more granular prompt engineering solution.
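Putting those components together, a prompty file has roughly this shape. This is an abbreviated, hypothetical sketch assembled from the component list above, not the sample file verbatim; all field values are placeholders.

```yaml
---
name: Contoso Chat Prompt
description: A retail assistant grounded in product and customer data
authors:
  - Your Name
model:
  api: chat
  configuration:
    type: azure_openai
  parameters:
    max_tokens: 3000
    temperature: 0.2
inputs:
  customer:
    type: object
  documentation:
    type: object
  question:
    type: string
sample:
  question: Tell me about hiking shoes
---
system:
You are a helpful assistant for Contoso Outdoors...

# Safety
...

# Customer Context
{{customer}}

question:
{{question}}
```

The YAML frontmatter carries the metadata and model configuration, while everything after the closing `---` is the prompt template itself, with `{{...}}` placeholders filled from the declared inputs at execution time.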
This sample uses a flex-flow feature that lets you "create LLM apps using a Python class or function as the entry point" - making it easier to test and run them using a code-first experience.
The entry point for this sample is the `chat_request.py` script.
You can now test the flow in different ways:
```shell
# Run the flow once from the command line
pf flow test --flow ...

# Run the flow with an interactive browser UI
pf flow test --flow ... --ui
```
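A flex-flow entry point is, in essence, a plain Python function. The sketch below is illustrative only: the function name, signature, and stubbed logic are assumptions for demonstration, not the actual code in `chat_request.py`.

```python
# Illustrative flex-flow entry point: a plain Python function that
# promptflow can invoke directly. Retrieval and LLM calls are stubbed;
# in the real sample they use Azure Cosmos DB, Azure AI Search,
# and Azure OpenAI via the chat.prompty asset.

def get_response(question: str, customerId: str, chat_history: list) -> dict:
    context = f"customer={customerId}, history_turns={len(chat_history)}"
    return {"question": question, "answer": f"[stubbed answer; {context}]"}

result = get_response("Tell me about hiking shoes", "2", [])
print(result["answer"])
```

Because the entry point is an ordinary function, it can be unit-tested and debugged like any other Python code before being run through the `pf` CLI.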
🌟 | Watch this space for more testing guidance.
This template uses `gpt-35-turbo` for chat completion, `gpt-4` for chat evaluation, and `text-embedding-ada-002` for vectorization. These models may not be available in all Azure regions. Check for up-to-date region availability and select a region accordingly.
This template uses the `Semantic Ranker` feature of Azure AI Search, which may be available only in certain regions. Check for up-to-date region availability and select a region accordingly.
By default, this sample uses:

- `sweden-central` for the OpenAI models
- `eastus` for the Azure AI Search resource

> [!NOTE]
> The default `azd deploy` takes a single `location` for deploying all resources within the resource group for that application. We set the default Azure AI Search location to `eastus` (in the `infra/` configuration), allowing you to use the default location setting to optimize for model availability and capacity in that region.
Pricing for services may vary by region and usage, so exact costs cannot be estimated in advance. You can estimate the cost of this project's architecture with Azure's pricing calculator for these services:
This template uses Managed Identity for authentication with key Azure services including Azure OpenAI, Azure AI Search, and Azure Cosmos DB. Applications use managed identities to obtain Microsoft Entra tokens without handling any credentials, removing the need for developers to manage secrets themselves and reducing overall complexity.
Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report of any detected issues. To follow best practices, we recommend that anyone building solutions based on our templates enable the GitHub secret scanning setting in their repo.
Have questions or issues to report? Please open a new issue after first verifying that the same question or issue has not already been reported. If it has, add any additional comments you may have to the existing issue.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.