withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

feat: get embedding for text #144

Closed. giladgd closed this pull request 8 months ago.

giladgd commented 8 months ago

Description of change

Resolves #123

How to get embedding for text

import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaEmbeddingContext} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load the model from a local GGUF file
const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "functionary-small-v2.2.q4_0.gguf")
});

// Create an embedding context, capping the context size at the model's training context size
const embeddingContext = new LlamaEmbeddingContext({
    model,
    contextSize: Math.min(4096, model.trainContextSize)
});

const text = "Hello world";
const embedding = await embeddingContext.getEmbeddingFor(text);

console.log(text, embedding.vector);
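A common use of the embeddings returned by getEmbeddingFor is comparing two texts for semantic similarity. As a sketch, assuming embedding.vector is a plain array of numbers, the vectors could be compared with cosine similarity using a hypothetical helper like this (not part of node-llama-cpp):

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means orthogonal. Hypothetical helper for illustration.
function cosineSimilarity(a: readonly number[], b: readonly number[]): number {
    if (a.length !== b.length)
        throw new Error("Vectors must have the same length");

    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

For example, after computing embeddings for two texts, `cosineSimilarity(embeddingA.vector, embeddingB.vector)` would yield a score close to 1 for semantically similar texts.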

Pull-Request Checklist

github-actions[bot] commented 8 months ago

:tada: This PR is included in version 3.0.0-beta.3 :tada:

The release is available on:

Your semantic-release bot :package::rocket:

github-actions[bot] commented 1 week ago

:tada: This PR is included in version 3.0.0 :tada:

The release is available on:

Your semantic-release bot :package::rocket: