withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

Adding function-calling to an app created from the electron-typescript-react template does not work on npm run build #369

Closed · chantal-rose closed this 1 month ago

chantal-rose commented 1 month ago

Issue description

Function calling does not work when I run npm run build to build the app.

Expected Behavior

I should be able to ask the LLM questions that require function calls.

Actual Behavior

The LLM is not able to call the functions. It just gets stuck trying to generate.

Steps to reproduce

I created the project using:

npm create node-llama-cpp@latest --template electron-typescript-react

I added functions as follows in electron/state/llmState.ts:

import {defineChatSessionFunction} from "node-llama-cpp";

export const functions = {
    getCurrentWeather: defineChatSessionFunction({
        description: "Get the current weather in a location",
        params: {
            type: "object",
            properties: {
                name: {
                    type: "string"
                }
            }
        },
...

I provide the functions to the LLM when I call prompt, as follows, in electron/state/llmState.ts:

await chatSession.prompt(message, {
    signal: promptAbortController.signal,
    stopOnAbortSignal: true,
    functions: functions
});
The LLM is able to call the functions when I start the app using

npm start

However, when I build it using

npm run build

the LLM is not able to call the functions. It just gets stuck trying to generate.

I am using Llama 3.1 8B at Q4 quantization.

My Environment

Dependency               Version
Operating System         macOS
CPU                      Apple M3
Node.js version          v20.18.0
TypeScript version       ^5.6.2
node-llama-cpp version   3.1.1

Additional Context

No response

Relevant Features Used

giladgd commented 1 month ago

@chantal-rose I've just tested it, and it worked fine on an M1 machine. Are you sure you opened the correct build on your machine? E.g., that you used the arm64 build and not the untagged x64 build? Opening an x64 build on an Apple Silicon Mac would be very slow and may not work correctly.

I've used this function implementation:

const functions = {
    getCurrentWeather: defineChatSessionFunction({
        description: "Get the current weather in a location",
        params: {
            type: "object",
            properties: {
                name: {
                    type: "string"
                }
            }
        },
        handler({name}) {
            console.log("Getting weather for", name);

            return {
                name,
                temperature: 25,
                description: "Sunny"
            };
        }
    })
};

With this model and the prompt "What's the weather in London?".

I recommend reading the function calling guide to make sure you're following the best practices and getting the best results.
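
For reference, here's a rough standalone sketch of how a function definition like the one above gets wired into a session and prompt outside of the Electron template. The model path is just a placeholder, and the surrounding setup follows the basic getLlama/LlamaChatSession flow:

import path from "path";
import {fileURLToPath} from "url";
import {getLlama, LlamaChatSession, defineChatSessionFunction} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const llama = await getLlama();
const model = await llama.loadModel({
    // placeholder path - point this at your own GGUF file
    modelPath: path.join(__dirname, "models", "Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const functions = {
    getCurrentWeather: defineChatSessionFunction({
        description: "Get the current weather in a location",
        params: {
            type: "object",
            properties: {
                name: {type: "string"}
            }
        },
        handler({name}) {
            // dummy data, just to demonstrate the round trip
            return {name, temperature: 25, description: "Sunny"};
        }
    })
};

const response = await session.prompt("What's the weather in London?", {functions});
console.log(response);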

chantal-rose commented 1 month ago

Hi @giladgd. Thanks for looking into this. It turned out that the problem was that I was using APIs that required API keys, which I had defined in a .env file in the project root directory. However, Vite handles environment variables a little differently.

I fixed the issue by adding the following line to vite.config.ts to ensure that my Electron app could access the API keys defined in the .env file:

envDir: path.join(__dirname),
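
For anyone hitting the same thing, here is roughly where that line goes in the config. The rest of the file is whatever the template generated, shown here only as a placeholder:

import path from "path";
import {defineConfig} from "vite";

export default defineConfig({
    // Load .env files from the project root so the API keys are picked up at build time
    envDir: path.join(__dirname),
    // ...the rest of the template-generated config (plugins, build options) stays unchanged
});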

I will close this issue.