Azure-Samples / agent-python-openai-prompty-langchain

Function calling for vector database lookup based on user question
MIT License

Function Calling with Prompty, LangChain, and Elasticsearch

In this sample, we use the new Prompty tool, LangChain, and Elasticsearch to build a large language model (LLM) search agent. Using Retrieval-Augmented Generation (RAG), the agent answers user questions over the provided data by combining real-time information retrieval with generative responses. A sketch of the core pattern follows the list below.

By the end of deploying this template, you should be able to:

  1. Describe the integration and functionality of Azure's Prompty, LangChain, and Elasticsearch within the LLM search agent.
  2. Explain how Retrieval-Augmented Generation (RAG) enhances the search capabilities of the agent.
  3. Build, run, evaluate, and deploy the LLM search agent to Azure.
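The core pattern, in rough strokes, looks like the sketch below. This is a hedged, minimal reconstruction rather than the sample's exact code: the index name matches the one created later in this README, the environment variables match those listed under Local Development, and a plain ChatPromptTemplate stands in for the Prompty-defined prompt.

```python
import os

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

# Vector store backed by the Elasticsearch index used by this template.
vectorstore = ElasticsearchStore(
    index_name="langchain-test-index",
    embedding=AzureOpenAIEmbeddings(
        azure_deployment=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"]
    ),
    es_url=os.environ["ELASTICSEARCH_ENDPOINT"],
    es_api_key=os.environ["ELASTICSEARCH_API_KEY"],
)

# Expose the retriever as a tool the model can call through function calling.
search_tool = create_retriever_tool(
    vectorstore.as_retriever(),
    "elasticsearch_search",
    "Searches the Elasticsearch index for passages relevant to the question.",
)

# The sample defines its prompt with Prompty; a plain template stands in here.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer the user's question, using the search tool when needed."),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

# AZURE_OPENAI_ENDPOINT and OPENAI_API_VERSION are read from the environment.
llm = AzureChatOpenAI(azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"])
agent = create_openai_functions_agent(llm, [search_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[search_tool])

print(executor.invoke({"input": "What does the indexed data say about X?"})["output"])
```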

Features

This project framework provides the following features:

Architecture Diagram

[Architecture diagram: architecture-diagram-prompty-elasticsearch]

Getting Started

Azure Account

IMPORTANT: In order to deploy and run this example, you'll need an Azure account.

Once you have an Azure account, you have two options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally if desired.

Elasticsearch Account

Go to Elasticsearch and create your account if you don't have one. For this quickstart template, create an index in your Elasticsearch account named langchain-test-index. Keep your encoded Elasticsearch API key in a safe place; you will need to pass it to this template.
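If you prefer to create the index and verify your credentials from code, a minimal sketch with the official elasticsearch Python client looks like this (the endpoint and key placeholders are illustrative):

```python
from elasticsearch import Elasticsearch

# Use the endpoint and encoded API key from your Elasticsearch deployment.
es = Elasticsearch(
    "https://<your-deployment>.es.<region>.cloud.es.io:443",
    api_key="<your encoded API key>",
)

# Create the index this template expects if it does not exist yet.
if not es.indices.exists(index="langchain-test-index"):
    es.indices.create(index="langchain-test-index")

print(es.info()["version"]["number"])  # quick connectivity check
```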

Security requirements

The Elasticsearch tool does not currently support Microsoft managed identity. We recommend using Azure Key Vault to secure your API keys.
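A minimal sketch of that approach, assuming a vault and secret name of your choosing (the names below are illustrative):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate with your Azure identity and read the key from Key Vault
# instead of keeping it in plain configuration.
credential = DefaultAzureCredential()
secrets = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net", credential=credential
)

elasticsearch_api_key = secrets.get_secret("elasticsearch-api-key").value
```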

Project setup

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally if desired.

GitHub Codespaces

You can run this repo virtually by using GitHub Codespaces, which will open a web-based VS Code in your browser:

Open in GitHub Codespaces

Once the codespace opens (this may take several minutes), open a terminal window.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)
  2. Open the project: Open in Dev Containers
  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

Local Environment

  1. Initialize the project from this template:

    azd init agent-python-openai-prompty-langchain

    Note that this command will initialize a git repository, so you do not need to clone this repository.

  2. Login to your Azure account:

    azd auth login
  3. Set the following environment variables: ELASTICSEARCH_ENDPOINT and ELASTICSEARCH_API_KEY.

  4. Create a new azd environment:

    azd env new

    Enter a name that will be used for the resource group. This will create a new folder in the .azure folder, and set it as the active environment for any calls to azd going forward.

  5. Provision and deploy the project to Azure: azd up

  6. Set up CI/CD with azd pipeline config

  7. Talk to your agent: see validate_deployment.ipynb for reference, or the sketch below.
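Beyond the notebook, you can also call the deployed service directly. A hedged sketch with LangServe's RemoteRunnable is shown below; the service URL comes from the azd up output, and the route path is assumed to match the local playground path used later in this README.

```python
from langserve import RemoteRunnable

# Replace with the endpoint printed by `azd up` for your deployment.
agent = RemoteRunnable("https://<your-service-endpoint>/openai-functions-agent/")

result = agent.invoke(
    {"input": "What can you tell me about my data?", "chat_history": []}
)
print(result)
```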

Local Development

Prerequisite

Dependency requirements

Go to the src\prompty-langchain-agent folder and do the following:

  1. Use Poetry to install all dependencies for the app:

    poetry install --no-interaction --no-ansi

  2. Use Poetry to install all dependencies for the packages:

    Go to the packages\openai-functions-agent folder and run: poetry install --no-interaction --no-ansi

  3. Set the environment variables:

    AZURE_OPENAI_ENDPOINT=<your aoai endpoint>
    OPENAI_API_VERSION=<your aoai api version>
    AZURE_OPENAI_DEPLOYMENT=<your aoai deployment name for chat>
    AZURE_OPENAI_EMBEDDING_DEPLOYMENT=<your aoai deployment name for embedding>
    ELASTICSEARCH_ENDPOINT=<your Elasticsearch endpoint>
    ELASTICSEARCH_API_KEY=<your encoded Elasticsearch API key>
  4. Run the app locally with LangServe.

  5. You can go to http://localhost:8000/openai-functions-agent/playground/ to test.

  6. You can mention your index in the input to tell the agent to use the search tool. A sketch of calling the local route programmatically follows.
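Besides the browser playground, each LangServe route also exposes an /invoke endpoint you can call programmatically. A hedged sketch is below; the nested payload shape (an inner input plus chat_history) is an assumption based on the openai-functions-agent package.

```python
import requests

resp = requests.post(
    "http://localhost:8000/openai-functions-agent/invoke",
    json={
        "input": {
            # Mentioning your index nudges the agent toward the search tool.
            "input": "Search langchain-test-index for documents about onboarding",
            "chat_history": [],
        }
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["output"])
```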

Clean up

To clean up all the resources created by this sample:

  1. Run azd down
  2. When asked if you are sure you want to continue, enter y
  3. When asked if you want to permanently delete the resources, enter y

The resource group and all the resources will be deleted.

Costs

You can estimate the cost of this project's architecture with Azure's pricing calculator.

Security Guidelines

We recommend using keyless authentication for this project. Read more about why you should use managed identities on our blog.
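For the Azure OpenAI side, keyless (Microsoft Entra ID) authentication can replace API keys; the Elasticsearch tool still needs its API key, as noted in the security requirements above. A minimal sketch, assuming the azure-identity and langchain-openai packages:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

# Request tokens for Azure OpenAI with your signed-in identity instead of a key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# AZURE_OPENAI_ENDPOINT and OPENAI_API_VERSION are read from the environment.
llm = AzureChatOpenAI(
    azure_deployment="<your chat deployment>",
    azure_ad_token_provider=token_provider,  # no AZURE_OPENAI_API_KEY needed
)
```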

Resources

LangSmith

We support LangSmith; you can follow their documentation to make it work.

Get started with LangSmith
