The UniversalLLMFunctionCaller automatically invokes Semantic Kernel functions until a given goal or workflow is complete. It does not require the LLM to support function calling natively (as OpenAI models do), which enables providers such as Mistral, Anthropic, Meta, Google, and others to use function calling as well. Usage example with Mistral:
IKernelBuilder builder = Kernel.CreateBuilder();
builder.AddMistralChatCompletion(Environment.GetEnvironmentVariable("mistral_key"), "mistral-small");
var kernel = builder.Build();
kernel.ImportPluginFromType<TimePlugin>("Time");
kernel.ImportPluginFromType<MathPlugin>("Math");
UniversalLLMFunctionCaller planner = new(kernel);
string ask = "What is the current hour number, plus 5?";
Console.WriteLine(ask);
string result = await planner.RunAsync(ask);
Console.WriteLine(result);
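The imported plugins are ordinary Semantic Kernel plugins: the planner discovers their [KernelFunction]-annotated methods and decides which ones to invoke. As a sketch of what such a plugin looks like (WeatherPlugin and its method are hypothetical, and the attribute usage assumes the standard Semantic Kernel plugin API):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical example plugin; any public method marked with
// [KernelFunction] becomes callable by the planner.
public class WeatherPlugin
{
    [KernelFunction, Description("Gets the current temperature in Celsius for a city")]
    public double GetTemperature([Description("Name of the city")] string city)
    {
        // Dummy value for illustration; a real plugin would call a weather API.
        return 21.5;
    }
}
```

It would then be registered like the built-in plugins above, e.g. kernel.ImportPluginFromType<WeatherPlugin>("Weather");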
This works with a single ask, but a ChatHistory is also supported:
[...]
UniversalLLMFunctionCaller planner = new(kernel);
ChatHistory history = new ChatHistory();
history.AddUserMessage("What is the capital of France?");
history.AddAssistantMessage("Paris");
history.AddUserMessage("And of Denmark?");
string result = await planner.RunAsync(history);
Here, the planner reads the context of the ChatHistory to determine the actual question ("What is the capital of Denmark?") instead of interpreting the last message out of context ("And of Denmark?").
According to internal tests, this planner is both faster and more reliable than the Handlebars Planner, which offers similar functionality out of the box for non-OpenAI LLMs. Tested with Mistral (NuGet: JC.SemanticKernel.Connectors.Mistral). More testing with additional use cases and LLMs is needed. Feel free to create issues and submit pull requests.
The plugins in the demo and test projects are taken from the main Semantic Kernel repo: https://github.com/microsoft/semantic-kernel. I do not claim ownership of or copyright on them.