mayooear / gpt4-pdf-chatbot-langchain

GPT4 & LangChain Chatbot for large PDF docs
https://www.youtube.com/watch?v=ih9PBGVVOO4
14.95k stars 3.02k forks

Add support for Pinecone Serverless #457

Closed JPaulMora closed 2 months ago

JPaulMora commented 10 months ago

Hello, I think it would be a great addition to have this new Pinecone feature added here. Serverless blog post for reference: https://www.pinecone.io/blog/serverless/

dosubot[bot] commented 10 months ago

🤖

Hello again @JPaulMora! Always a pleasure to see familiar faces around here. Let's jump right into it!

Thank you for your suggestion to add support for Pinecone Serverless to the gpt4-pdf-chatbot-langchain repository. Based on the current server architecture of the repository, it seems that Pinecone is already being used for vector storage and retrieval. However, it's not clear from the provided context how Pinecone Serverless would enhance the functionality of the repository or how it differs from the current Pinecone implementation.

The current serverless function in the repository uses a Pinecone index to create a vector store, which is then used as a retriever in the chatbot's chain of operations. If Pinecone Serverless were to be integrated, it would likely replace the current PineconeStore with a serverless variant. This could potentially affect the way the vector store is initialized and used throughout the application.

However, without specific details on how Pinecone Serverless operates or how it differs from the current PineconeStore, it's difficult to provide a more detailed analysis. Could you please provide more information about the specific features and benefits of Pinecone Serverless, and how it would enhance the functionality of the gpt4-pdf-chatbot-langchain repository? This would help in understanding the potential impact of this integration and how it could be implemented.

Here is the relevant code from the repository for reference:

const index = pinecone.Index(PINECONE_INDEX_NAME);

/* create vectorstore */
const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings({}),
  {
    pineconeIndex: index,
    textKey: 'text',
    namespace: PINECONE_NAME_SPACE, //namespace comes from your config folder
  },
);

// expose the vector store as a retriever, then create the chain
const retriever = vectorStore.asRetriever();
const chain = makeChain(retriever);

Source: pages/api/chat.ts

Looking forward to your response.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

edmundman commented 9 months ago

bump

precupstefan commented 9 months ago

@JPaulMora @edmundman

you have to bump @pinecone-database/pinecone to the latest version: npm install @pinecone-database/pinecone@latest

then change in pinecone-client.ts


const pc = new Pinecone({
    apiKey: "YOUR_API_KEY",
    environment: "YOUR_ENVIRONMENT",
});

to


const pc = new Pinecone({
    apiKey: 'YOUR_API_KEY' 
});

Furthermore, you have to update the vector store: install @langchain/pinecone and change the store import to import { PineconeStore } from "@langchain/pinecone";
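The client-config change above boils down to dropping the environment field, which the serverless SDK no longer accepts. As a sketch (the type names and the migrateClientConfig helper here are illustrative, not part of either SDK):

```typescript
// Old (pod-based) client config required an environment; the serverless
// SDK takes only the API key. These types mirror the two shapes.
type PodConfig = { apiKey: string; environment: string };
type ServerlessConfig = { apiKey: string };

// Illustrative helper: carry an old config over to the serverless shape
// by dropping the now-removed environment field.
function migrateClientConfig(cfg: PodConfig): ServerlessConfig {
  return { apiKey: cfg.apiKey };
}

const migrated = migrateClientConfig({
  apiKey: 'YOUR_API_KEY',
  environment: 'YOUR_ENVIRONMENT',
});
console.log(JSON.stringify(migrated)); // {"apiKey":"YOUR_API_KEY"}
```

Passing the old shape (with environment) to the new Pinecone constructor is what causes initialization errors after upgrading, so the fix is purely one of removing the extra field.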

JPaulMora commented 9 months ago

Thanks @precupstefan Will try this later

CarmichaelAJ commented 7 months ago

This did not work for me, I can't initialize pinecone now..

Ravel36 commented 6 months ago

Has anyone been able to get this to work? I tried what was mentioned above, but the recommended changes don't match what is actually in the pinecone-client.ts file.

sulphh commented 6 months ago

Bump

Rainzo commented 5 months ago

I've managed to make it work after installing @langchain/pinecone, making really sure the PineconeStore import is done like this: import { PineconeStore } from "@langchain/pinecone"

CarmichaelAJ commented 5 months ago

Would you mind pasting your pinecone-client.ts?

I'm also confused as to why chat.ts has import { PineconeStore } from "@langchain/pinecone"
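For reference, a minimal pinecone-client.ts under the serverless SDK might look like the following sketch (assuming PINECONE_API_KEY is set in your .env; the exact file in the repo may differ):

```typescript
// pinecone-client.ts — sketch for the serverless SDK (@pinecone-database/pinecone v2+).
// Note there is no `environment` field any more; only the API key is needed.
import { Pinecone } from '@pinecone-database/pinecone';

if (!process.env.PINECONE_API_KEY) {
  throw new Error('Missing Pinecone API key in .env file');
}

export const pinecone = new Pinecone({
  apiKey: process.env.PINECONE_API_KEY,
});
```

As for the chat.ts import: that is expected. LangChain moved its Pinecone integration out of the core langchain package into the standalone @langchain/pinecone package, which is why both that package and the updated Pinecone SDK have to be installed.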