hi @geffzhang I think the solution is already provider agnostic, isn't it? SearchClient depends on ITextEmbeddingGeneration and ITextGeneration, so you should be able to leverage any custom class implementing those interfaces, talking to any LLM.
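For example (a minimal sketch, not from this thread: the class name is hypothetical, and the GenerateEmbeddingsAsync signature assumes the Semantic Kernel interface shape of that period), a custom embedding generator is just a class implementing the interface:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel.AI.Embeddings;

// Hypothetical adapter: SearchClient only sees the interface, so the vectors
// can come from llama.cpp, a local ONNX model, or any other provider.
public class MyLocalEmbeddingGeneration : ITextEmbeddingGeneration
{
    public Task<IList<ReadOnlyMemory<float>>> GenerateEmbeddingsAsync(
        IList<string> data,
        CancellationToken cancellationToken = default)
    {
        // Return one embedding vector per input string.
        throw new NotImplementedException();
    }
}
```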
Thanks. I see the repo only supports OpenAI/Azure OpenAI out of the box; I tried to add LLamaSharp support over the last two days, and the solution is indeed service agnostic.
But how do I configure the service in SemanticMemoryConfig?

```json
"SemanticMemory": {
  ……
  // - AI completion and embedding configuration for LLama2
  // - TextModel is a completion model (e.g., "local-llama-chat").
  // - EmbeddingModel is an embedding model (e.g., "local-llama-embed").
  // - ModelPath is the LLama 2 GGUF model path.
  // - GpuLayerCount is the number of layers to offload to the GPU.
  "LLama": {
    "ModelPath": "C:\\Users\\zsygz\\Documents\\GitHub\\LLamaSharp\\LLama.Unittest\\Models\\llama-2-7b-chat.Q4_0.gguf",
    "ContextSize": 1024,
    "GpuLayerCount": 50,
    "Seed": 1337,
    "TextModel": "local-llama-chat",
    "EmbeddingModel": "local-llama-embed"
  }
  ……
}
```
And how should this tie into config.DataIngestion.EmbeddingGeneratorTypes?
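One way to load that section (a sketch using the standard Microsoft.Extensions.Configuration binder; the LlamaSettings POCO below is hypothetical and simply mirrors the JSON fields above):

```csharp
using Microsoft.Extensions.Configuration;

// Hypothetical POCO mirroring the "LLama" JSON section above.
public class LlamaSettings
{
    public string ModelPath { get; set; } = "";
    public int ContextSize { get; set; }
    public int GpuLayerCount { get; set; }
    public int Seed { get; set; }
    public string TextModel { get; set; } = "";
    public string EmbeddingModel { get; set; } = "";
}

public static class ConfigDemo
{
    public static LlamaSettings Load()
    {
        // Requires Microsoft.Extensions.Configuration.Json and
        // Microsoft.Extensions.Configuration.Binder.
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .Build();

        // Bind the custom section and hand it to the custom generators.
        return configuration
            .GetSection("SemanticMemory:LLama")
            .Get<LlamaSettings>() ?? new LlamaSettings();
    }
}
```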
Also, QdrantConfig should have a vector size setting; 1536 is the OpenAI embedding vector size.
> hi @geffzhang I think the solution is already provider agnostic, isn't it? SearchClient depends on ITextEmbeddingGeneration and ITextGeneration, so you should be able to leverage any custom class implementing those interfaces, talking to any LLM.
Can we extract the ITextEmbeddingGeneration and ITextGeneration interfaces into a separate NuGet package so that AI providers can implement them on their own?
@geffzhang the integration with LLamaSharp should look something like this:
```csharp
using Microsoft.SemanticMemory;
using Microsoft.SemanticMemory.AI;
using Microsoft.SemanticMemory.MemoryStorage.Qdrant;

public class Program
{
    public static void Main()
    {
        var llamaConfig = new LlamaConfig
        {
            // ...
        };

        var openAIConfig = new OpenAIConfig
        {
            EmbeddingModel = "text-embedding-ada-002",
            APIKey = Env.Var("OPENAI_API_KEY")
        };

        // Mix and match: custom text generation, OpenAI embeddings, Qdrant storage.
        var memory = new MemoryClientBuilder()
            .WithCustomTextGeneration(new LlamaTextGeneration(llamaConfig))
            .WithOpenAITextEmbedding(openAIConfig)
            .WithQdrant(new QdrantConfig { /* ... */ });

        // ...
    }
}

public class LlamaConfig
{
    // ...
}

public class LlamaTextGeneration : ITextGeneration
{
    private readonly LlamaConfig _config;

    public LlamaTextGeneration(LlamaConfig config)
    {
        this._config = config;
    }

    public IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = new())
    {
        // ...
    }
}
```
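For the elided GenerateTextAsync body, a LLamaSharp-backed version might look roughly like the sketch below. This is an assumption-heavy illustration: it presumes a LLamaSharp API surface with ModelParams, LLamaWeights and StatelessExecutor (these names changed across LLamaSharp versions), that LlamaConfig exposes ModelPath/GpuLayerCount, and that TextGenerationOptions exposes Temperature and MaxTokens.

```csharp
using System.Collections.Generic;
using System.Threading;
using LLama;
using LLama.Common;
using Microsoft.SemanticMemory.AI;

public class LlamaTextGenerationSketch : ITextGeneration
{
    private readonly LLamaWeights _weights;
    private readonly ModelParams _params;

    public LlamaTextGenerationSketch(LlamaConfig config)
    {
        // Assumption: LlamaConfig mirrors the JSON section shown earlier.
        this._params = new ModelParams(config.ModelPath)
        {
            GpuLayerCount = config.GpuLayerCount // plus ContextSize, Seed, ... as needed
        };
        this._weights = LLamaWeights.LoadFromFile(this._params);
    }

    public IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = default)
    {
        // The stateless executor streams generated tokens as strings,
        // which matches the IAsyncEnumerable<string> contract.
        var executor = new StatelessExecutor(this._weights, this._params);
        var inferenceParams = new InferenceParams
        {
            Temperature = (float)options.Temperature, // assumed option name
            MaxTokens = options.MaxTokens ?? 1024     // assumed option name
        };
        return executor.InferAsync(prompt, inferenceParams, cancellationToken);
    }
}
```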
The vector size is handled automatically, as long as it is consistent across executions.
The code above is using:
<PackageReference Include="LLamaSharp" Version="0.5.1"/>
<PackageReference Include="Microsoft.SemanticMemory.Core" Version="0.3.231009.6-preview"/>
@xbotter it should be possible to use custom logic with the existing NuGet package.
Thanks to the great work from @xbotter, LLamaSharp is about to have an integration for kernel-memory. I'd appreciate it if one of the KM developers could help review this PR.
@xbotter

> Can we extract the ITextEmbeddingGeneration and ITextGeneration interfaces into a separate NuGet package so that AI providers can implement them on their own?
done 👍 see v0.18 / PR #189
@geffzhang I think this is now solved. The solution allows you to customize text generation, embedding generation, tokenization, and RAG parameters such as how many tokens can be used. Summarization also takes the model characteristics into account. As far as possible, the code will also log errors or throw exceptions if some value is incorrect, e.g. trying to run an 8000-token prompt with a model that supports only 4096 tokens, or trying to generate an embedding for a 4000-token string with a model that supports only 2000 tokens, etc.
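As an illustration of what that looks like for a custom provider (a sketch; the member names follow my reading of the post-v0.18 abstractions and should be checked against the actual package, and MyTokenizer is a hypothetical helper):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: a custom generator reports its own token limits and tokenizer,
// so the pipeline can budget prompts and summarization accordingly.
public class MyLocalTextGenerator : ITextGenerator
{
    // The model's context window; token budgets are validated against this.
    public int MaxTokenTotal => 4096;

    // Provider-specific token counting, used when sizing prompts.
    public int CountTokens(string text) => MyTokenizer.Count(text);

    public IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = default)
    {
        // Call the local model here, streaming tokens back.
        throw new NotImplementedException();
    }
}
```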
Semantic Kernel 1.0 is AI service agnostic, while Semantic Memory currently supports only Azure OpenAI/OpenAI; we should make Semantic Memory AI provider agnostic as well.