microsoft / kernel-memory

RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.
https://microsoft.github.io/kernel-memory
MIT License

SM like SK should be AI service agnostic #101

Closed · geffzhang closed this issue 10 months ago

geffzhang commented 1 year ago

Semantic Kernel 1.0 is AI service agnostic, but Semantic Memory currently only supports Azure OpenAI/OpenAI. We should make Semantic Memory AI provider agnostic as well.

dluc commented 1 year ago

hi @geffzhang, I think the solution is already provider agnostic, isn't it? SearchClient depends on ITextEmbeddingGeneration and ITextGeneration, so you should be able to leverage any custom class implementing those interfaces, talking to any LLM.
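For the embedding side, a custom generator would follow the same pattern. A minimal sketch, assuming the SK-style batch signature of ITextEmbeddingGeneration from that preview and the Microsoft.SemanticMemory.AI namespace used in the snippet further down; the model call itself is a placeholder:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticMemory.AI; // assumed namespace for the interface

public class CustomEmbeddingGeneration : ITextEmbeddingGeneration
{
    public async Task<IList<ReadOnlyMemory<float>>> GenerateEmbeddingsAsync(
        IList<string> data,
        CancellationToken cancellationToken = default)
    {
        var embeddings = new List<ReadOnlyMemory<float>>(data.Count);
        foreach (var text in data)
        {
            // Placeholder: invoke any embedding model here, local or remote.
            embeddings.Add(await this.ComputeEmbeddingAsync(text, cancellationToken));
        }

        return embeddings;
    }

    // Hypothetical helper standing in for the real model call.
    private Task<ReadOnlyMemory<float>> ComputeEmbeddingAsync(string text, CancellationToken ct)
    {
        throw new NotImplementedException();
    }
}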

geffzhang commented 1 year ago

thanks, I see the repo only supports OpenAI/Azure OpenAI. I tried to add LLamaSharp support over the last two days, and the solution is indeed service agnostic.

but how do we support configuring the service in SemanticMemoryConfig?

"SemanticMemory": {
  ……
  // AI completion and embedding configuration for LLama 2:
  // - TextModel is a completion model (e.g., "local-llama-chat").
  // - EmbeddingModel is an embedding model (e.g., "local-llama-embed").
  // - ModelPath is the LLama 2 gguf model path.
  // - GpuLayerCount is the number of model layers to offload to the GPU.
  "LLama": {
    "ModelPath": "C:\\Users\\zsygz\\Documents\\GitHub\\LLamaSharp\\LLama.Unittest\\Models\\llama-2-7b-chat.Q4_0.gguf",
    "ContextSize": 1024,
    "GpuLayerCount": 50,
    "Seed": 1337,
    "TextModel": "local-llama-chat",
    "EmbeddingModel": "local-llama-embed"
  }
  ……
}

config.DataIngestion.EmbeddingGeneratorTypes

QdrantConfig should have a vector size setting; 1536 is the OpenAI embedding vector size.
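One way to wire this up would be the standard .NET configuration binder (Microsoft.Extensions.Configuration.Json plus .Binder). A minimal sketch, assuming a LlamaConfig POCO whose shape mirrors the keys in the JSON section above; the binding calls are standard, the class itself is an assumption:

using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

// Bind the "SemanticMemory:LLama" section shown above into a POCO.
var llamaConfig = config.GetSection("SemanticMemory:LLama").Get<LlamaConfig>();

// Assumed POCO, mirroring the JSON keys above.
public class LlamaConfig
{
    public string ModelPath { get; set; } = string.Empty;
    public int ContextSize { get; set; }
    public int GpuLayerCount { get; set; }
    public int Seed { get; set; }
    public string TextModel { get; set; } = string.Empty;
    public string EmbeddingModel { get; set; } = string.Empty;
}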

xbotter commented 1 year ago

> hi @geffzhang, I think the solution is already provider agnostic, isn't it? SearchClient depends on ITextEmbeddingGeneration and ITextGeneration, so you should be able to leverage any custom class implementing those interfaces, talking to any LLM.

Can we extract the ITextEmbeddingGeneration and ITextGeneration interfaces into a separate NuGet package so that AI providers can implement them on their own?

dluc commented 1 year ago

@geffzhang the integration with llamasharp should look something like this:

using Microsoft.SemanticMemory;
using Microsoft.SemanticMemory.AI;
using Microsoft.SemanticMemory.MemoryStorage.Qdrant;

public class Program
{
    public static void Main()
    {
        var llamaConfig = new LlamaConfig
        {
            // ...
        };

        var openAIConfig = new OpenAIConfig
        {
            EmbeddingModel = "text-embedding-ada-002",
            APIKey = Env.Var("OPENAI_API_KEY")
        };

        var memory = new MemoryClientBuilder()
            .WithCustomTextGeneration(new LlamaTextGeneration(llamaConfig))
            .WithOpenAITextEmbedding(openAIConfig)
            .WithQdrant(new QdrantConfig { /* ... */ })
            .BuildServerlessClient(); // build the in-process memory client

        // ...
    }
}

public class LlamaConfig
{
    // ...
}

public class LlamaTextGeneration : ITextGeneration
{
    private readonly LlamaConfig _config;

    public LlamaTextGeneration(LlamaConfig config)
    {
        this._config = config;
    }

    public IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = default)
    {
        // Call LLamaSharp here and stream the generated tokens back.
        throw new NotImplementedException();
    }
}

The vector size is handled automatically, as long as it is consistent across executions.

The code above is using:

    <PackageReference Include="LLamaSharp" Version="0.5.1"/>
    <PackageReference Include="Microsoft.SemanticMemory.Core" Version="0.3.231009.6-preview"/>
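From there, usage would follow the repo README of that preview, e.g. importing a document and asking a question. A sketch, assuming the client was finished with BuildServerlessClient() as above:

// Import a file into memory, then query it with source-tracked answers.
await memory.ImportDocumentAsync("sample-document.pdf");

var answer = await memory.AskAsync("What is this document about?");
Console.WriteLine(answer.Result);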

@xbotter it should be possible to use custom logic with the existing NuGet package.

AsakusaRinne commented 12 months ago

Thanks to the great work from @xbotter, LLamaSharp is about to have an integration with kernel-memory. I'd appreciate it if one of the KM developers could help review this PR.

dluc commented 10 months ago

@xbotter

> Can we extract the ITextEmbeddingGeneration and ITextGeneration interfaces into a separate NuGet package so that AI providers can implement them on their own?

done 👍 see v0.18 / PR #189
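With the interfaces in their own package, a provider can reference just the abstractions instead of the full core package, along the lines of the reference below. The package name is an assumption based on the current repo; check the v0.18 release notes for the exact name and version:

    <PackageReference Include="Microsoft.KernelMemory.Abstractions" Version="..." />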

dluc commented 10 months ago

@geffzhang I think this is now solved. The solution allows customizing text generation, embedding generation, tokenization, and RAG parameters such as how many tokens can be used. Summarization also takes the model characteristics into account. As far as possible, the code will also log errors or throw exceptions when a value is incorrect, e.g. trying to run an 8000-token prompt on a model that supports only 4096 tokens, or trying to generate embeddings for a 4000-token string with a model that supports only 2000 tokens.
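To make that failure mode concrete, a guard of roughly this shape (an illustrative sketch with hypothetical names, not the actual KM code):

using System;

public static class TokenLimitGuard
{
    // Hypothetical helper: reject inputs that exceed the model's context window,
    // as in the 8000-token prompt vs. 4096-token model example above.
    public static void EnsureFits(int inputTokens, int maxModelTokens, string what)
    {
        if (inputTokens > maxModelTokens)
        {
            throw new ArgumentOutOfRangeException(
                nameof(inputTokens),
                $"The {what} is {inputTokens} tokens, but the model supports only {maxModelTokens}.");
        }
    }
}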