Azure / azure-sdk

This is the Azure SDK parent repository and mostly contains documentation around guidelines and policies as well as the releases for the various languages supported by the Azure SDK.
http://azure.github.io/azure-sdk
MIT License

Azure SDK Review - [Introduction to Azure AI Studio] #7903

Open · azure-sdk opened this issue 3 weeks ago

azure-sdk commented 3 weeks ago

New SDK Review meeting has been requested.

Service Name: Azure AI Studio
Review Created By: Neehar Duvvuri
Review Date: 08/22/2024 02:05 PM PT

Release Plan: 1285
Hero Scenarios Link: Not Provided
Architecture Diagram Link: Not Provided
Core Concepts Doc Link: Not Provided
APIView Links: Python

Description: The Azure AI team proposes to introduce the package azure-ai-evals. This package already exists as promptflow-evals, but we are aiming to deprecate that package and rename it to azure-ai-evals. An API view for the current version of promptflow-evals has been provided below, and reference documentation can be found here: https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.html.

This package will allow customers to evaluate their LLM applications against various quality metrics, such as groundedness, coherence, content safety, and more. It will also allow customers to simulate various adversarial/jailbreak scenarios against their LLM application to test its robustness.
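To make the evaluator idea concrete, here is a minimal, self-contained sketch of the pattern such a package exposes: an evaluator is a callable that scores a model response against its inputs and returns a dict of named metrics. This is *not* the promptflow-evals or azure-ai-evals API (see the reference documentation linked above for the real one); the function name and heuristic below are hypothetical stand-ins for illustration only.

```python
import re

def simple_groundedness(response: str, context: str) -> dict:
    """Toy groundedness score: the fraction of words in the response
    that also appear in the supplied context. Real evaluators use an
    LLM or a trained model rather than word overlap."""
    response_words = re.findall(r"[a-z0-9']+", response.lower())
    context_words = set(re.findall(r"[a-z0-9']+", context.lower()))
    if not response_words:
        return {"groundedness": 0.0}
    hits = sum(1 for word in response_words if word in context_words)
    return {"groundedness": hits / len(response_words)}

result = simple_groundedness(
    response="Paris is the capital of France",
    context="France is a country in Europe. Its capital is Paris.",
)
print(result)  # 4 of the 6 response words appear in the context
```

A real evaluation run would apply a battery of such evaluators (groundedness, coherence, content safety, ...) over a dataset of prompt/response rows and aggregate the per-row scores.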

Detailed meeting information and documents provided can be accessed here

azure-sdk commented 3 weeks ago

Meeting updated by Neehar Duvvuri

Service Name: Azure AI Studio
Review Created By: Neehar Duvvuri
Review Date: 08/22/2024 02:05 PM PT

Hero Scenarios Link: here
Architecture Diagram Link: Not Provided
Core Concepts Doc Link: here
APIView Links: Python
