Atome-FE / llama-node

We believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp, runs locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
https://llama-node.vercel.app/
Apache License 2.0

langchain integration #56

Open luca-saggese opened 1 year ago

luca-saggese commented 1 year ago

Hello, I'm trying to use the langchain integration but I cannot figure out how to use it. I'm following some examples from langchain:

```js
import { LLM } from "llama-node";
import { LLamaRS } from "llama-node/dist/llm/llama-rs.js";
import readline from "readline";
import fs from "fs";
import path from "path";
import { SerpAPI } from "langchain/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "langchain/tools/calculator";
import { LLamaEmbeddings } from "llama-node/dist/extensions/langchain.js";

const SERPAPI_KEY = '';

const model = path.resolve(process.cwd(), "./ggml-vic7b-q4_1.bin");
const llama = new LLM(LLamaRS);
llama.load({ path: model });

const tools = [
    new SerpAPI(SERPAPI_KEY, {
        hl: 'en',
        gl: 'us'
    }),
    new Calculator(),
];

const executor = await initializeAgentExecutorWithOptions(tools, llama, {
    agentType: 'chat-zero-shot-react-description'
});
console.log('initialized');
const ret = await executor.call({
    input: "Who is Olivia Wilde's boyfriend? What is his age raised to the 0.23 power?"
});

console.log('ret:', ret.output);
```

but I get:

```
TypeError: this.llm.generatePrompt is not a function
    at LLMChain._call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/llm_chain.js:80:48)
    at async LLMChain.call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/base.js:65:28)
    at async LLMChain.predict (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/llm_chain.js:98:24)
    at async ChatAgent._plan (file:///Users/lvx/dalai/node_modules/langchain/dist/agents/agent.js:197:24)
    at async AgentExecutor._call (file:///Users/lvx/dalai/node_modules/langchain/dist/agents/executor.js:82:28)
    at async AgentExecutor.call (file:///Users/lvx/dalai/node_modules/langchain/dist/chains/base.js:65:28)
    at async file:///Users/lvx/dalai/agent.js:35:13
```

I understand that this happens because the LLM model does not implement that function. Is there a way to call it, or do I have to create a translation class?

hlhr202 commented 1 year ago

For now, you have to manually adapt the generate function to langchain.
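In practice, that adaptation means wrapping the llama-node instance in a class that exposes the interface LangChain's chains call into (LangChain's custom LLM pattern: subclass its `LLM` base class and implement `_call` and `_llmType`). Below is a minimal, dependency-free sketch of that wrapper. The `createCompletion` method, its parameter names (`nTokPredict`, `temp`), and the token-callback shape are assumptions based on llama-node's API at the time of this issue; in a real project the class would `extends LLM` from `langchain/llms/base` instead of standing alone.

```typescript
// Assumed shape of the llama-node instance being wrapped: a streaming
// completion API that invokes a callback once per generated token.
interface LlamaLike {
  createCompletion(
    params: { prompt: string; nTokPredict: number; temp: number },
    onToken: (data: { token: string; completed: boolean }) => void
  ): Promise<void>;
}

// Sketch of a LangChain-style adapter. In a real project this would
// `extends LLM` from "langchain/llms/base" so that chains and agents
// can call generatePrompt on it; here only the two methods a subclass
// must provide are shown.
class LlamaNodeLLM {
  constructor(private llama: LlamaLike) {}

  // LangChain uses this identifier for logging/serialization.
  _llmType(): string {
    return "llama-node";
  }

  // LangChain passes a single prompt string and expects the full
  // completion back, so we accumulate the streamed tokens.
  async _call(prompt: string): Promise<string> {
    let output = "";
    await this.llama.createCompletion(
      { prompt, nTokPredict: 256, temp: 0.2 },
      ({ token, completed }) => {
        if (!completed) output += token;
      }
    );
    return output;
  }
}
```

With such a wrapper, the agent setup from the original snippet would pass `new LlamaNodeLLM(llama)` to `initializeAgentExecutorWithOptions` instead of the raw `llama` instance, which is why the raw instance fails with `generatePrompt is not a function`.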

matthoffner commented 1 year ago

I'm not as familiar with gpt4all, but I noticed they are adding langchain support; maybe there is some overlap: https://github.com/hwchase17/langchainjs/pull/1204