This repository has been retired and is no longer maintained. No further updates, bug fixes, or support will be provided. We recommend referring to alternative projects with ongoing updates and support.
This accelerator provides a no-code Studio for users to quickly build complex, multi-stage AI pipelines across multiple Azure AI and ML services. Users can select and stack AI/ML services from across Azure Cognitive Services (OpenAI, Speech, Language, Form Recognizer, Read API) and Azure Machine Learning into a single, fully integrated pipeline. Integration between services is automated by BPA, and once deployed, a web app is created. This customizable UI provides a drag-and-drop interface for end users to build multi-service pipelines. Finally, a user-created pipeline is triggered as soon as the first input file(s) are uploaded, and the results are stored in Azure Blob Storage.
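As a minimal sketch of the trigger step, uploading a file to the pipeline's input container with the Azure CLI kicks off processing. The storage account and container names below are placeholders; use the ones created by your deployment.

```shell
# Upload an input file to trigger the deployed pipeline.
# <your-bpa-storage-account> and the container name are assumptions --
# substitute the values from your resource group.
az storage blob upload \
  --account-name <your-bpa-storage-account> \
  --container-name documents \
  --name sample.pdf \
  --file ./sample.pdf \
  --auth-mode login
```

Results land in Blob Storage under the output location configured for the pipeline.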
Video links are not currently available. Please submit an issue or reach out to the contributors if you need support.
Software and tools:
Github account (Admin)
Azure Resource Group (Owner)
Ensure your subscription has Microsoft.DocumentDB enabled
To confirm/enable:
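One way to confirm and enable the provider is with the Azure CLI (the portal's "Resource providers" blade on the subscription works as well):

```shell
# Check whether the Microsoft.DocumentDB resource provider is registered
az provider show --namespace Microsoft.DocumentDB --query registrationState -o tsv

# Register it if the state printed above is not "Registered"
az provider register --namespace Microsoft.DocumentDB
```

Registration can take a few minutes; re-run the first command until it reports "Registered".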
Ensure that you have accepted terms and conditions for Responsible AI:
You must initiate the creation of a "Cognitive services multi-service account" from the Azure portal to review and acknowledge the terms and conditions. You can do so here: Quickstart: Create a Cognitive Services resource using the Azure portal.
Once accepted, you can create subsequent resources using any deployment tool (SDK, CLI, ARM template, etc.) under the same Azure subscription.
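For example, after accepting the terms in the portal, a multi-service Cognitive Services account can be created from the CLI. The resource group, account name, and location below are placeholders, not values from this repo:

```shell
# Sketch: create a multi-service Cognitive Services account via the CLI
# (my-bpa-cogsvc, my-bpa-rg, and eastus are illustrative placeholders)
az cognitiveservices account create \
  --name my-bpa-cogsvc \
  --resource-group my-bpa-rg \
  --kind CognitiveServices \
  --sku S0 \
  --location eastus \
  --yes
```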
Fork the repository to a git account of which you are the Admin.
Click the "Deploy to Azure" button that corresponds to your environment and the patterns you wish to create. The Redis pattern is only required for Vector Search.
Only the Resource Group, the Forked Repo Personal Access Token (workflow level), and the Forked Git Repo URL are needed; the remaining parameters are filled in for you.
If your function app does not show any functions after running the scripts above, follow the steps below to deploy manually.
Run the following to install dependencies and compile the TypeScript files into JavaScript:

```shell
cd src/backend/api/
npm install
npm run build
```
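The publish commands below deploy from a zip archive, so the built 'api' directory needs to be packaged first. A sketch, assuming the archive name matches whatever you set $JS_ZIP_FILE_PATH to:

```shell
# From inside src/backend/api/, zip the built contents (including
# node_modules, which zip deployment needs) into ../api.zip
zip -r ../api.zip .
```

The same approach applies to the 'huggingface' directory for the second function app.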
In the Function App's application settings, update the value of `WEBSITE_RUN_FROM_PACKAGE` to 1, then publish each function app:

```shell
func azure functionapp publish $JS_FUNCTION_APP_NAME --javascript --force --deployment-source-zip $JS_ZIP_FILE_PATH
```

where `$JS_FUNCTION_APP_NAME` is the name of the BPA function resource and `$JS_ZIP_FILE_PATH` is the path to the zipped archive of the 'api' directory.

```shell
func azure functionapp publish $HF_FUNCTION_APP_NAME --python --build remote --force --deployment-source-zip $HF_ZIP_FILE_PATH
```

where `$HF_FUNCTION_APP_NAME` is the name of the Hugging Face function resource and `$HF_ZIP_FILE_PATH` is the path to the zipped archive of the 'huggingface' directory.

(Deprecated. Replaced by Cognitive Search with Vector Search features)
Once you've created a high-level Resource Group, you fork this repository and import helper libraries, taking advantage of GitHub Actions to deploy the set of Azure Cognitive Services and to manage all of the new Azure module credentials in the background, within your newly created pipeline. After pipeline deployment, a static web app is created with your customizable POC UI for building and triggering pipelines.
Document Ingestion High-level Technical Architecture
Several Sample Pipelines/Patterns Easily Created via the UI's drag-n-drop Interface
Pipeline #1: Two examples of quickly creating a pipeline. The first ingests text data and adds any of the Azure Language Services to process your text (see all Azure Language Services offerings here!) before visualizing the results in the provided web app. The second starts from ingesting video data and adds any of the Video Analyzer services to process your video (see all Azure Video Analyzer services offerings here!).
Pipeline #2: Quickly create a pipeline leveraging multiple Cognitive Services. In this sample pipeline, you can ingest audio and transcribe or translate it with the Azure Speech Service (see all available Azure Speech Services here!); the resulting text output is then further extracted and transformed with the Azure Language Service, and another analysis layer is added with Azure OpenAI models.
Pipeline #3: Inspired by the popular Enterprise ChatGPT demo, this sample pipeline provides the backend and popular UI for creating a ChatGPT-like experience over your own data.
Pipeline #4: A popular approach to information retrieval over your own documents using a vector store and ChatGPT! Document chunking and the vector store implementation are handled by the backend after you create your pipeline.