dotnet / aspire

An opinionated, cloud ready stack for building observable, production ready, distributed applications in .NET
https://learn.microsoft.com/dotnet/aspire

Treating Azure OpenAI Deployments as resources #3089

Open aaronpowell opened 3 months ago

aaronpowell commented 3 months ago

With the Azure OpenAI (AOAI) integration we can add a service resource into the AppHost like so:

var openai = builder.AddAzureOpenAI("openai")
                    .AddDeployment(new("gpt-35-turbo", "gpt-35-turbo", "0613"));

builder.AddProject<Projects.OpenAIEndToEnd_WebStory>("webstory")
       .WithReference(openai);

But within our client project we're going to need to know the name of the deployment to invoke it in the SDK:

    private ChatCompletionsOptions chatCompletionOptions = new ChatCompletionsOptions()
    {
        DeploymentName = "gpt-35-turbo",
        Messages =
        {
            new ChatRequestSystemMessage("Pick a random topic and write a sentence of a fictional story about it.")
        }
    };

Since the deployment name is an arbitrary string, there's the possibility of it being miskeyed in the client application. My workaround is to have code like so:

.AddDeployment(new(builder.Configuration["Azure:AI:ChatDeploymentName"] ?? "gpt-4", "gpt-4", "1106"))

Here I use the config value or fall back to a known default, but I then need to ensure that the same value is passed to the client using WithEnvironment.
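A minimal sketch of how that plumbing might look (the Azure:AI:ChatDeploymentName key and the OpenAI__DeploymentName variable name are just illustrative choices, not anything Aspire defines):

// AppHost sketch: resolve the deployment name once, use it for the Azure deployment
// and hand the same value to the consuming project.
var chatDeploymentName = builder.Configuration["Azure:AI:ChatDeploymentName"] ?? "gpt-4";

var openai = builder.AddAzureOpenAI("openai")
                    .AddDeployment(new(chatDeploymentName, "gpt-4", "1106"));

builder.AddProject<Projects.OpenAIEndToEnd_WebStory>("webstory")
       .WithReference(openai)
       .WithEnvironment("OpenAI__DeploymentName", chatDeploymentName);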

It'd be convenient if there was a way for the deployment names to flow into the client application, either by allowing them to be created as parameters with AddParameter (and then passing that into the AzureOpenAIDeployment constructor), or by passing the deployment info along (probably as a separate environment variable) when the AzureOpenAIResource is passed to the projects as a resource. Discovery of this could be challenging, though, and it'd fall down if you're using a connection string for local dev like this:

var ai = builder.ExecutionContext.IsPublishMode ?
    builder.AddAzureOpenAI(ServiceNames.AzureOpenAI)
        .AddDeployment(new(builder.Configuration["Azure:AI:ChatDeploymentName"] ?? "gpt-4", "gpt-4", "1106"))
        .AddDeployment(new(builder.Configuration["Azure:AI:EmbeddingDeploymentName"] ?? "text-embedding-ada-002", "text-embedding-ada-002", "2")) :
    builder.AddConnectionString(ServiceNames.AzureOpenAI);

In this scenario the deployment info isn't available until provisioning happens.

mitchdenny commented 3 months ago

Looping in @tg-msft.

This is one of the areas where having "deep linked connection strings" is really helpful. When you define a database resource via Aspire like the following:

var builder = DistributedApplication.CreateBuilder(args);
var sqldb = builder.AddSqlServer("sql").AddDatabase("sqldb");
builder.AddProject<Projects.MyApp>("myapp")
       .WithReference(sqldb);

The connection string ConnectionStrings__sqldb will end up being "deep" in that it connects directly to the database, and the SqlConnection we inject ends up being pre-wired to the database.
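For comparison, the consuming project only needs the resource name; a sketch assuming the Aspire.Microsoft.Data.SqlClient component:

// Consuming project sketch: the injected SqlConnection is already wired to the
// "sqldb" database via ConnectionStrings__sqldb.
var builder = WebApplication.CreateBuilder(args);
builder.AddSqlServerClient("sqldb");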

With OpenAI this isn't the case. We get a connection to the Cognitive Services account, but the deployment name and all other connection information is something that the developer needs to plumb through themselves. Aaron illustrates how this creates extra work because he is trying to come up with a creative way of defining the deployment name once and plumbing it through.

I'm a big proponent of two things:

  1. Providing a single string that can be used to connect to a resource, and in the case where you need a child resource to actually do anything useful, identifiers for that should be on the connection string.
  2. When the connection string is consumed, a resource should be injected into DI that is pre-configured to point to that deeply linked location.

In the case of Azure Open AI I would like to see a connection string format defined that:

  1. Specifies the endpoint
  2. Specifies the deployment name
  3. Specifies the authentication method
  4. (Optionally) a key, if managed identity isn't used.

Then I would like to see an OpenAIDeploymentClient which is preconfigured to talk to a particular deployment.
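To make that concrete, a purely hypothetical sketch (neither this connection string shape nor an OpenAIDeploymentClient/AddOpenAIDeploymentClient API exists today):

// Hypothetical connection string that also carries the deployment:
// ConnectionStrings__openai = "Endpoint=https://{account}.openai.azure.com/;Deployment=gpt-35-turbo;Key={key}"

// Hypothetical component registration that would put a deployment-scoped client into DI:
builder.AddOpenAIDeploymentClient("openai");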

mitchdenny commented 3 months ago

Glad you raised this @aaronpowell ... I was writing up an example of how you could plumb through the deployment name and found a bug. Fixing that now and will share a snippet of code with you in a sec.

mitchdenny commented 3 months ago

OK here is a code sample that I put together as part of this bug fix: #3092

var openai = builder.AddAzureOpenAI("openai", (_, _, _, deployments) => {
    var deployment = deployments.Single();
    deployment.AddOutput("modelName", x => x.Name);
}).AddDeployment(new("gpt-35-turbo", "gpt-35-turbo", "0613"));

builder.AddProject<Projects.OpenAIEndToEnd_WebStory>("webstory")
       .WithReference(openai)
       .WithEnvironment("OpenAI__DeploymentName", openai.GetOutput("modelName"));
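On the consuming side that value can then be read back from configuration; a sketch (the OpenAI:DeploymentName key mirrors the OpenAI__DeploymentName variable above):

// WebStory project sketch: read the deployment name injected by the AppHost.
var deploymentName = builder.Configuration["OpenAI:DeploymentName"]
    ?? throw new InvalidOperationException("OpenAI deployment name not configured.");

var chatCompletionOptions = new ChatCompletionsOptions()
{
    DeploymentName = deploymentName,
    Messages =
    {
        new ChatRequestSystemMessage("Pick a random topic and write a sentence of a fictional story about it.")
    }
};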

aaronpowell commented 3 months ago

That's an interesting idea, using the outputs in that manner. I wonder if there's a way that we could automatically wire up the outputs across the pipeline.

mitchdenny commented 3 months ago

My idea was silly and unnecessary, just do this:

var deploymentAndModelName = "gpt-35-turbo";
var openai = builder.AddAzureOpenAI("openai").AddDeployment(
    new(deploymentAndModelName, deploymentAndModelName, "0613"));

builder.AddProject<Projects.OpenAIEndToEnd_WebStory>("webstory")
       .WithReference(openai)
       .WithEnvironment("OpenAI__DeploymentName", deploymentAndModelName);

mitchdenny commented 3 months ago

One of the benefits of Aspire is you can use plain old C# :)
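For instance, the name could live in a small shared class library referenced by both the AppHost and the WebStory project (a sketch; the SharedConstants class and project layout are assumptions, not part of the sample):

// Shared/SharedConstants.cs
public static class SharedConstants
{
    // Single source of truth for the deployment/model name.
    public const string ChatDeploymentName = "gpt-35-turbo";
}

The AppHost would pass SharedConstants.ChatDeploymentName to AddDeployment and WithEnvironment, and the client would use the same constant in ChatCompletionsOptions, so the string can't drift between projects.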

mitchdenny commented 3 months ago

Proposing we close this if you are happy @aaronpowell

aaronpowell commented 3 months ago

While yes, that approach solves the problem (it's essentially what I do), I wonder if it can't be done in a more automatic manner. After all, you know that the deployments are going to be needed; or is there a concern about exposing info that isn't needed (say, providing a deployment that isn't used by a particular service)?

I'm trying to think of other resources where you might want to bring in some additional data in this manner, but I'm struggling a bit to think of any; maybe it's best to leave AOAI as an outlier.

timheuer commented 3 months ago

@mitchdenny could you have a WithReference overload and add them both in there, so that the reference gets the deployment automatically without more code?

mitchdenny commented 3 months ago

I think that gets messy pretty quickly. What if you have two OpenAI accounts, both with the same deployment name? The variable that you inject for the deployment name now needs to include the OpenAI account resource name to make sure there isn't a conflict.

Then you have to consider the consuming side: how do they know what the configuration name is?

Let's imagine that AddDeployment didn't just flow through the AzureOpenAIResource builder, and instead returned a deployment builder:

var builder = DistributedApplication.CreateBuilder(args);
var aoai = builder.AddAzureOpenAI("openai").AddDeployment("foo", ...);
var app = builder.AddProject<Projects.MyApp>("app")
                 .WithReference(aoai);

... the problem you are going to have is on the consumption side. How does the developer know which environment variable to get the deployment name out of? The AzureOpenAIClient doesn't allow you to create a client based on the deployment name; that is something you pass into the chat completion options.

We've moved from a problem of the developer having to do something themselves to a developer having to guess what environment variable we stashed this key piece of data in.

The way to tackle these kinds of issues is go upstream and talk to the AI team about providing a connection string format that includes the deployment name, and having some kind of client that can have a pre-filled deployment name.

I suspect that horse has bolted though.

aaronpowell commented 1 month ago

> The way to tackle these kinds of issues is go upstream and talk to the AI team about providing a connection string format that includes the deployment name, and having some kind of client that can have a pre-filled deployment name.

The problem with that is that the client is agnostic of the model; the model is specified per-request, since the client is really just a typed HttpClient.
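For reference, with the current Azure.AI.OpenAI SDK the deployment (and therefore the model) travels with each request rather than with the client; a sketch using an injected OpenAIClient:

// The OpenAIClient is bound only to the endpoint and credentials; the deployment
// name is supplied per call via ChatCompletionsOptions.
var response = await openAIClient.GetChatCompletionsAsync(new ChatCompletionsOptions
{
    DeploymentName = "gpt-35-turbo",
    Messages = { new ChatRequestUserMessage("Hello!") }
});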