SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License

[Feature]: SemanticKernel FuctionCall #758

Open justmine66 opened 1 month ago

justmine66 commented 1 month ago

Background & Description

Following the example at https://github.com/kosimas/LLamaSharp/blob/master/LLama.Examples/Examples/SemanticKernelFunctionCalling.cs, I am unable to find the `AutoInvoke` property used there: `LLamaSharpPromptExecutionSettings llamaSettings = new() { AutoInvoke = true };`
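For context, the usage pattern from that example looks roughly like this. This is a sketch, not official API: `AutoInvoke` exists only in the 'kosimas' fork of LLamaSharp.SemanticKernel, and the `kernel` variable is assumed to be a Semantic Kernel `Kernel` already configured with a LLamaSharp chat-completion service:

```csharp
// Sketch based on the fork's SemanticKernelFunctionCalling example.
// NOTE: `AutoInvoke` is only available in the 'kosimas' fork of
// LLamaSharp.SemanticKernel; it does not exist in the official package.
using LLamaSharp.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel;

var llamaSettings = new LLamaSharpPromptExecutionSettings
{
    // Ask the connector to invoke kernel functions automatically
    // when the model emits a function call (fork-only feature).
    AutoInvoke = true,
};

// `kernel` is assumed to be a Kernel with a LLamaSharp chat-completion
// service and the plugins/functions you want the model to call.
var result = await kernel.InvokePromptAsync(
    "What is the weather in Paris?",
    new KernelArguments(llamaSettings));
```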

API & Usage

No response

How to implement

No response

zsogitbe commented 1 month ago

This is not part of the official LLamaSharp codebase. You need to use the LLamaSharp fork from 'kosimas' until he files a PR and it is merged into the official LLamaSharp code.

justmine66 commented 1 month ago

> This is not part of the official LLamaSharp codebase. You need to use the LLamaSharp fork from 'kosimas' until he files a PR and it is merged into the official LLamaSharp code.

Thanks, I am very much looking forward to official support.

zsogitbe commented 1 month ago

You are welcome.

Please note that you can do the same, but better, with the Handlebars planner in Semantic Kernel. Plugins/functions in a Handlebars plan are executed automatically. The 'AutoInvoke' feature was first introduced by OpenAI and relies on a model specialized to generate function-calling syntax, whereas you can execute a Handlebars plan with any model.
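As a sketch of that alternative (assuming a `kernel` already configured with a LLamaSharp chat-completion service and some imported plugins; `HandlebarsPlanner` ships in the prerelease Microsoft.SemanticKernel.Planners.Handlebars package):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Planning.Handlebars;

// `kernel` is assumed to be a Kernel configured with a LLamaSharp
// chat-completion service and the plugin functions you want to expose.
var planner = new HandlebarsPlanner();

// The planner asks the model to generate a Handlebars template (the plan)
// that chains the available plugin functions toward the stated goal.
var plan = await planner.CreatePlanAsync(
    kernel, "Summarize today's sales report and email it to the team");

// Executing the plan runs the plugin functions automatically,
// without relying on OpenAI-style function-calling output.
var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result);
```

Because the plan is an ordinary template rather than model-emitted function-call syntax, this works with local models that were never trained to produce OpenAI's tool-call format.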