llm-tools / embedJs

A NodeJS RAG framework to easily work with LLMs and embeddings
https://llm-tools.mintlify.app/get-started/introduction
Apache License 2.0

Receiving Conversation error #161

leVoT8 commented 1 week ago

🐛 Describe the bug

My current code:

import { RAGApplicationBuilder, LocalPathLoader } from '@llm-tools/embedjs';
import { OpenAi, OpenAiEmbeddings } from '@llm-tools/embedjs-openai';
import { WebLoader } from '@llm-tools/embedjs-loader-web';
import { PineconeDb } from '@llm-tools/embedjs-pinecone';
import { PdfLoader } from '@llm-tools/embedjs-loader-pdf';

// Replace these with your OpenAI and Pinecone API keys
process.env.OPENAI_API_KEY = "<xxx>";
process.env.PINECONE_API_KEY = "<xxx>";

const ragApplication = await new RAGApplicationBuilder()
.setModel(new OpenAi({ model: "gpt-4o-mini" }))
.setEmbeddingModel(new OpenAiEmbeddings())
.setVectorDatabase(new PineconeDb({
    projectName: 'medicalinfo',
    namespace: 'ns1',
    indexSpec: {
        serverless: {
            cloud: 'aws',
            environment: 'us-east-1'
        },
    },
}))
.setSystemMessage("Only include information provided to you, do not make up answers. If the information is not available, state that you do not know.")
.build();

// The loader below is commented out because these documents were previously loaded and upserted into Pinecone
// await ragApplication.addLoader(new LocalPathLoader({ path: './knowledge/Current' }))

const res = await ragApplication.query('What is the current treatment for pneumonia?');
console.log(res);

The error I receive:

.../node_modules/@llm-tools/embedjs-interfaces/src/interfaces/base-model.js:55
        const conversation = await BaseModel.cache.getConversation(conversationId);
                                                   ^
TypeError: Cannot read properties of undefined (reading 'getConversation')
    at OpenAi.query (file:///.../node_modules/@llm-tools/embedjs-interfaces/src/interfaces/base-model.js:55:52)
    at RAGApplication.query (file:///.../node_modules/@llm-tools/embedjs/src/core/rag-application.js:347:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///.../embedJS/index.js:34:13

Node.js v18.17.0

Am I doing something wrong, or is this a bug? How do I fix it?
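For context, the stack trace shows BaseModel.cache evaluating to undefined when getConversation is called on it. A minimal sketch that reproduces the same class of TypeError; this is illustrative only, not embedJs's actual source:

// Sketch: a static field that nothing ever assigns stays undefined.
class BaseModel {
    static cache; // would normally hold a conversation cache implementation

    async query(conversationId) {
        // Throws: TypeError: Cannot read properties of undefined (reading 'getConversation')
        // because BaseModel.cache was never initialized before this call.
        const conversation = await BaseModel.cache.getConversation(conversationId);
        return conversation;
    }
}

await new BaseModel().query('default');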

adhityan commented 2 days ago

I see a few things here. First, the initialization of Pinecone seems incorrect. This is the newer syntax (using region instead of environment):

const ragApplication = await new RAGApplicationBuilder()
    .setModel(new OpenAi({ model: 'gpt-4o-mini' }))
    .setEmbeddingModel(new OpenAiEmbeddings())
    .setVectorDatabase(
        new PineconeDb({
            projectName: 'test',
            namespace: 'dev',
            indexSpec: {
                serverless: {
                    cloud: 'aws',
                    region: 'us-east-1',
                },
            },
        }),
    )
    .setSystemMessage(
        'Only include information provided to you, do not make up answers. If the information is not available, state that you do not know.',
    )
    .build();
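
For context, the indexSpec above appears to map onto what the current Pinecone Node SDK itself expects when creating a serverless index; a sketch of the equivalent raw SDK call, where the index name 'test' and the dimension 1536 (a common OpenAI embedding size) are illustrative assumptions:

import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });

// Illustrative values: 'test' as the index name, 1536 as the embedding dimension.
await pinecone.createIndex({
    name: 'test',
    dimension: 1536,
    metric: 'cosine',
    spec: {
        serverless: {
            cloud: 'aws',
            region: 'us-east-1', // 'region' is the serverless key; 'environment' belonged to the old pod-based API
        },
    },
});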

I think it is likely you are using an older version of embedJs, which used an older version of the Pinecone API. Could you update all the embedJs libraries (npm packages) to version 0.1.18 and let me know if you still see this error?
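
For reference, one way to move every embedJs package from the snippet above to that version in a single step (the package names are taken from the original imports; anything else in your package.json would need the same treatment):

npm install @llm-tools/embedjs@0.1.18 @llm-tools/embedjs-openai@0.1.18 @llm-tools/embedjs-loader-web@0.1.18 @llm-tools/embedjs-pinecone@0.1.18 @llm-tools/embedjs-loader-pdf@0.1.18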