Jenscaasen / UniversalLLMFunctionCaller

A planner that integrates into Semantic Kernel to enable function calling on all chat-based LLMs (Mistral, Bard, Claude, Llama, etc.)

Compatibility with Ollama #1

Open · frikimanHD opened this issue 1 month ago

frikimanHD commented 1 month ago

Hello. I'm working on a project that uses the Ollama service to run the Mistral 8x7B model. I'm trying to make it run a simple kernel function that returns the current date and time, but I get this exception:

System.Exception: The LLM is not compatible with this approach.
   at JC.SemanticKernel.Planners.UniversalLLMFunctionCaller.UniversalLLMFunctionCaller.RunAsync(String task)
   at JC.SemanticKernel.Planners.UniversalLLMFunctionCaller.UniversalLLMFunctionCaller.RunAsync(ChatHistory askHistory)
   at SemanticKernelApp.SemanticKernelApp.Main(String[] args) in C:\Users\pgimeno\source\repos\SemanticKernelApp\Program.cs:line 44

This is the code of the project I'm working on:

using JC.SemanticKernel.Planners.UniversalLLMFunctionCaller;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

namespace SemanticKernelApp
{
    class SemanticKernelApp
    {
        static async Task Main(string[] args)
        {
#pragma warning disable SKEXP0010
#pragma warning disable SKEXP0060
            // Local Ollama endpoint and model, reached through the OpenAI-compatible connector
            var endpoint = new Uri("http://192.168.1.18:42069");
            var modelId = "mistral:latest";
            bool acabat = false; // "acabat" = finished: flag that ends the chat loop
            HttpClient client = new HttpClient();
            client.Timeout = TimeSpan.FromDays(5);
            var kernelBuilder = Kernel.CreateBuilder().AddOpenAIChatCompletion(modelId: modelId, apiKey: null, endpoint: endpoint, httpClient: client);

            var kernel = kernelBuilder.Build();
            // Register the plugin whose functions the planner should be able to call
            kernel.Plugins.AddFromType<CustomPlugin>("CustomPlugin");
            Console.WriteLine("Type \"$leave\" to leave");
            var chatCompletion = kernel.GetRequiredService<IChatCompletionService>();
            var chat = new ChatHistory();
            // Planner that emulates function calling for models without native tool support
            UniversalLLMFunctionCaller planner = new(kernel);

            // Note: these execution settings are declared but never passed to any call below
            OpenAIPromptExecutionSettings settings = new() { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };

            while (!acabat)
            {
                Console.Write("\nUser: ");
                var userInput = Console.ReadLine();

                if (userInput != "$leave")
                {
                    try
                    {

                        chat.AddUserMessage(userInput);
                        var bot_answer = await planner.RunAsync(chat);
                        Console.Write($"\nAI: {bot_answer.ToString()}\n");
                        chat.AddAssistantMessage(bot_answer.ToString());

                    }
                    catch (Exception e)
                    {
                        Console.WriteLine(e.ToString());
                        Console.ReadLine();
                    }

                }
                else
                {
                    acabat = true;
                }

            }

        }

    }
}

It would be very helpful to know whether the issue is in my code or whether the function caller is simply not compatible with Ollama. Thank you in advance.
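
The CustomPlugin registered in the code above is not included in the issue. As a rough sketch, a plugin exposing the current date and time could look like the following; the class and function names (CustomPlugin, GetCurrentDateTime) are assumptions, not taken from the actual project:

using System;
using System.ComponentModel;
using Microsoft.SemanticKernel;

namespace SemanticKernelApp
{
    public class CustomPlugin
    {
        // Marked as a kernel function so the planner can discover and invoke it.
        [KernelFunction, Description("Returns the current date and time")]
        public string GetCurrentDateTime()
        {
            return DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss");
        }
    }
}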

Jenscaasen commented 1 month ago

Hey there, I have not tested it with Ollama, but looking at your code I see that you are using the OpenAI connector. OpenAI and Mistral share a lot of similarities in their API, but there are some subtle differences. Please try using the Mistral connector. Microsoft has now added an official Mistral connector to SK, so please don't use mine: theirs is under active maintenance and development, mine is abandoned.
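
For reference, a minimal sketch of switching the kernel setup from the OpenAI connector to the official Mistral connector. This assumes the Microsoft.SemanticKernel.Connectors.MistralAI package and its AddMistralChatCompletion builder extension (experimental, hence the pragma), and uses a hypothetical API key and model id:

#pragma warning disable SKEXP0070 // the official Mistral connector is experimental
using Microsoft.SemanticKernel;

// Build the kernel against the Mistral connector instead of the OpenAI-compatible one.
// Method name and parameters are assumptions based on the official connector package.
var kernel = Kernel.CreateBuilder()
    .AddMistralChatCompletion(
        modelId: "mistral-small-latest",   // assumed model id
        apiKey: "YOUR_MISTRAL_API_KEY")    // hypothetical key
    .Build();
kernel.Plugins.AddFromType<CustomPlugin>("CustomPlugin");

Whether that connector can also target a local Ollama endpoint is not confirmed in this thread.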

d3-eugene-titov commented 3 weeks ago

It's working perfectly with the latest Ollama.