withcatai / node-llama-cpp

Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

feat: function calling support in a chat session's `prompt` function #101

Closed · giladgd closed this 7 months ago

giladgd commented 9 months ago

Make it possible to provide functions that the model can call as part of the response.

It should be as simple as something like this:

const res = await chatSession.prompt("What is the current weather?", {
    functions: {
        getWeather: {
            description: "Get the current weather for a location"
            params: {
                location: {
                    type: "string"
                }
            },
            handler({location}) {
                console.log("Providing fake weather for location:", location);

                return {
                    temperature: 32,
                    raining: true,
                    unit: "celsius"
                };
            }
        },
        getCurrentLocation: {
            description: "Get the current location",
            handler() {
                console.log("Providing fake location");

                return "New York, New York, United States".
            }
        }
    }
});
console.log(res);

If you have ideas for a text format I can use to prompt the model with, please share. I'm looking for a format that can achieve all of these:

I thought about implementing support for this format as part of GeneralChatPromptWrapper, but I'm not really sure whether this is the safest way to distinguish between text and function calling (a rendering sketch follows the example below):

You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don't know the answer to a question, please don't share false information.

Available functions:
```
function getWeather(params: {location: string});
function getCurrentLocation();
```

You can call these functions by writing text like this:
[[call: myFunction({param1: "value"})]]

### Human:
What is the current weather?

### Assistant:
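
For illustration, here's a minimal sketch of how the function definitions passed to `prompt` could be rendered into that "Available functions" section. The `FunctionDef` shape below is a hypothetical stand-in that mirrors the example above, not the module's actual type:

```typescript
// Hypothetical shape of a single function definition, mirroring the
// example above; not the module's actual type.
interface FunctionDef {
    description?: string;
    params?: Record<string, {type: string}>;
    handler: (params?: any) => any;
}

// Render each function as a TypeScript-like signature for the
// "Available functions" section of the system prompt.
function renderFunctionSignatures(functions: Record<string, FunctionDef>): string {
    return Object.entries(functions)
        .map(([name, def]) => {
            if (def.params == null)
                return `function ${name}();`;

            const fields = Object.entries(def.params)
                .map(([paramName, param]) => `${paramName}: ${param.type}`)
                .join(", ");

            return `function ${name}(params: {${fields}});`;
        })
        .join("\n");
}
```

Given the `getWeather` and `getCurrentLocation` definitions from the example above, this produces exactly the two signatures shown in the prompt.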

Then, when the model writes text, it may go like this:

I'll get the current location to fetch the weather for it.
[[call: getCurrentLocation()]]

I'll then detect the function call in the model's response, evaluate the function, and append this text to the context:

[[result: "New York, New York, United States"]]

So the model can then continue the completion:

I'll now get the current weather for New York, New York, United States.
[[call: getWeather({location: "New York, New York, United States"})]]

I'll then detect the function call in the model's response, evaluate the function, and append this text to the context (see the detection sketch after this walkthrough):

[[result: {temperature: 32, raining: true, unit: "celsius"}]]

So the model can then continue the completion:

The current weather for New York, New York, United States is 32 degrees Celsius, and it's currently raining.
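
Here's a rough sketch of that detect-and-evaluate step, reusing the hypothetical `FunctionDef` from the previous sketch and assuming the raw model output is available as a string. In practice the call marker would more likely be detected on the token stream, so generation can stop right after it:

```typescript
// Matches a `[[call: functionName(params)]]` marker in the model output.
const callPattern = /\[\[call: (\w+)\((.*?)\)\]\]/s;

// Detect a function call, run its handler, and produce the `[[result: ...]]`
// text that gets appended to the context before resuming the completion.
// Returns null when the response contains no function call.
async function evaluateFunctionCall(
    modelOutput: string,
    functions: Record<string, FunctionDef>
): Promise<string | null> {
    const match = modelOutput.match(callPattern);
    if (match == null)
        return null;

    const [, functionName, rawParams] = match;
    const functionDef = functions[functionName];
    if (functionDef == null)
        throw new Error(`The model called an unknown function: ${functionName}`);

    // Assumes the grammar constrains the parameters to valid JSON;
    // the walkthrough above uses unquoted keys, which JSON.parse rejects.
    const params = rawParams === "" ? undefined : JSON.parse(rawParams);
    const result = await functionDef.handler(params);

    return `[[result: ${JSON.stringify(result)}]]`;
}
```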

I plan to use grammar tricks to make sure the model can only call existing functions, and only with the right parameter types.
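
As a sketch of what those grammar tricks could look like: llama.cpp supports GBNF grammars for constraining sampling, so a grammar generated from the function definitions could permit only the two example functions, each with its expected parameter shape. A hand-written approximation, not the module's actual implementation:

```typescript
// A hand-written approximation of a GBNF grammar that only allows calls to
// the two example functions, each with its expected parameter shape.
// In practice this text would be generated from the function definitions,
// and the grammar would also need to permit free text around the call.
const functionCallGrammar = String.raw`
root ::= "[[call: " call "]]"
call ::= get-weather | get-current-location
get-weather ::= "getWeather({location: " string "})"
get-current-location ::= "getCurrentLocation()"
string ::= "\"" [^"]* "\""
`;
```

With grammar-constrained sampling, once a call starts the model simply can't emit a function that doesn't exist or a parameter of the wrong shape.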

How you can help

I'm currently working on a major change in this module, so if you'd like to help with implementing any of this, please let me know beforehand so your work won't become incompatible with the new changes.

github-actions[bot] commented 7 months ago

:tada: This issue has been resolved in version 3.0.0-beta.2 :tada:

The release is available on:

Your semantic-release bot :package::rocket:

github-actions[bot] commented 7 months ago

:tada: This issue has been resolved in version 3.0.0-beta.4 :tada:

The release is available on:

Your semantic-release bot :package::rocket: