dotnet-smartcomponents / smartcomponents

Experimental, end-to-end AI features for .NET apps

Allowing WebAssembly to store the OpenAI key in the web browser's Local Storage #7

Closed. ADefWebserver closed this issue 6 months ago.

ADefWebserver commented 6 months ago

In the docs it says "you cannot use a Blazor WebAssembly Standalone App hosted on a static file server. This is purely because you need a server to hold your API keys securely." However, in my pure WebAssembly application, I have the end user enter their OpenAI key and store it in the web browser's localStorage (I then use that key in the OpenAI calls).
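
For illustration, a minimal sketch of that pattern using standard Blazor JS interop; the ApiKeyStore class and the "openai-api-key" entry name are made up for this example:

using Microsoft.JSInterop;

// Illustrative helper: persists a user-entered key in the web browser's
// localStorage via standard Blazor JS interop.
public class ApiKeyStore(IJSRuntime js)
{
    public ValueTask SaveAsync(string apiKey)
        => js.InvokeVoidAsync("localStorage.setItem", "openai-api-key", apiKey);

    public ValueTask<string> LoadAsync()
        => js.InvokeAsync<string>("localStorage.getItem", "openai-api-key");
}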

Therefore, if you allow me to set the OpenAI key right before I make the OpenAI API calls, there is no risk. I would love to make a PR, but is the source code not available?

ADefWebserver commented 6 months ago

I tried to hack into the code, but the attached is as far as I got :(

SmartComponentsWebAssembly.zip

cc: @danroth27

SteveSandersonMS commented 6 months ago

Strictly speaking, you can already define your own IInferenceBackend (e.g., as mentioned at https://github.com/dotnet-smartcomponents/smartcomponents/blob/main/docs/smart-paste.md#customizing-the-language-model-backend) and then wire it up to call OpenAI, Azure OpenAI, or any other service in any way you like (with any keys you like). As such, I don't think it makes any difference where you choose to store the API keys; if you want to let users put their own keys into localStorage, you can do so.
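
For illustration, here is a minimal sketch of such a backend. The IInferenceBackend/ChatParameters shapes are assumptions based on the linked customization doc and may differ from the actual package; the direct OpenAI REST call and the "openai-api-key" localStorage entry are likewise illustrative:

using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.JSInterop;
using SmartComponents.StaticAssets.Inference;

// Sketch: read the user's key from localStorage at call time and invoke the
// OpenAI chat completions REST endpoint directly from the browser.
public class BrowserKeyInferenceBackend(IJSRuntime js, HttpClient http) : IInferenceBackend
{
    public async Task<string> GetChatResponseAsync(ChatParameters options)
    {
        var apiKey = await js.InvokeAsync<string>("localStorage.getItem", "openai-api-key");

        using var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.openai.com/v1/chat/completions")
        {
            Content = JsonContent.Create(new
            {
                model = "gpt-3.5-turbo",
                messages = (options.Messages ?? Enumerable.Empty<ChatMessage>()).Select(m => new
                {
                    role = m.Role.ToString().ToLowerInvariant(),
                    content = m.Text,
                }),
            }),
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return json.RootElement.GetProperty("choices")[0]
            .GetProperty("message").GetProperty("content").GetString();
    }
}

Registering it would then follow the linked doc (something along the lines of .WithInferenceBackend<BrowserKeyInferenceBackend>() during service registration).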

However, a further challenge you'll face is that the JS code is set up to make requests to the inference endpoints as HTTP requests. That will be awkward if you don't have any server to call. In theory, I think you could define a JS service worker that receives these calls, forwards them back into .NET via Blazor WebAssembly JS interop, and from there makes calls into whatever inference DI services are registered (e.g., SmartTextAreaInference, SmartPasteInference).
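
To make the interop half of that concrete, here is a hypothetical sketch of the .NET receiving end. The service worker would relay each intercepted fetch() to the page (e.g., via postMessage), and page script would forward it with DotNet.invokeMethodAsync; the endpoint names, JSON payloads, and handler wiring below are all made up and would have to match what the components' JS actually sends:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.JSInterop;

// Hypothetical bridge between the service worker and the in-browser DI
// services (e.g., SmartPasteInference, SmartTextAreaInference).
public static class InferenceInterop
{
    // One handler per inference endpoint, registered at startup; each takes
    // and returns the JSON payload the component expects.
    private static readonly Dictionary<string, Func<string, Task<string>>> _handlers = new();

    public static void Register(string endpoint, Func<string, Task<string>> handler)
        => _handlers[endpoint] = handler;

    [JSInvokable]
    public static Task<string> HandleInferenceRequest(string endpoint, string requestJson)
        => _handlers.TryGetValue(endpoint, out var handler)
            ? handler(requestJson)
            : throw new InvalidOperationException($"No handler registered for endpoint '{endpoint}'.");
}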

So, as far as I'm aware, it's technically possible, albeit tricky. If a JS-side abstraction over the inference calls becomes a common request, we could definitely consider adding one. For now, I'd recommend the service worker technique.

ADefWebserver commented 6 months ago

@SteveSandersonMS - Thank you so much for taking the time for such a detailed response :)

Also, this is how I configured my own IInferenceBackend; it actually worked in my sample. I created this class:

#nullable disable
using System.Reflection;
using Microsoft.Extensions.Configuration;
using SmartComponents.Inference.OpenAI;

public static class DynamicConfiguration
{
    // Injects the OpenAI settings into configuration at runtime, so the key
    // can come from somewhere safe like the web browser's localStorage
    // instead of a server-side secrets store.
    public static void DynamicConfigurationUtil(this IConfigurationBuilder configuration)
    {
        configuration.AddInMemoryCollection(new Dictionary<string, string>
        {
            ["SmartComponents:DeploymentName"] = "gpt-3.5-turbo",
            ["SmartComponents:ApiKey"] = "{{ Your Open AI Key *Get This from somewhere safe like Web Browser LocalStorage* }}"
        });
    }

    // Instantiates the library's internal ApiConfig type via reflection to
    // surface any configuration error (e.g., a missing API key) without
    // waiting for the first inference call to fail.
    public static Exception GetConfigError(IConfiguration config)
    {
        var apiConfigType = typeof(OpenAIInferenceBackend).Assembly
            .GetType("SmartComponents.Inference.OpenAI.ApiConfig", true);
        try
        {
            _ = Activator.CreateInstance(apiConfigType, config);
        }
        catch (TargetInvocationException ex) when (ex.InnerException is not null)
        {
            // Reflection wraps the constructor's exception; unwrap it.
            return ex.InnerException;
        }
        catch (Exception ex)
        {
            return ex;
        }

        return null; // No error: the configuration is valid.
    }
}

Then I called it from Program.cs like this:

builder.Configuration.DynamicConfigurationUtil();
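
For completeness, a hedged sketch of how the GetConfigError helper above might be used right after that call (the error handling is illustrative):

var configError = DynamicConfiguration.GetConfigError(builder.Configuration);
if (configError is not null)
{
    // Illustrative: surface the problem (e.g., prompt the user for a key)
    // instead of letting the first Smart Component call fail.
    Console.WriteLine($"OpenAI configuration problem: {configError.Message}");
}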

SteveSandersonMS commented 6 months ago

Glad to hear this approach has worked for you!