andrewnguonly / Lumos

A RAG LLM co-pilot for browsing the web, powered by local LLMs
MIT License

OpenAI Embeddings API Integration #177

Closed arhaang13 closed 6 months ago

arhaang13 commented 6 months ago

Can you please help me navigate how to change the application and experiment with the OpenAI embeddings API?

andrewnguonly commented 6 months ago

You'll have to update the scripts/background.ts file.

  1. Install @langchain/openai: npm install @langchain/openai
  2. Import OpenAIEmbeddings. See docs.

     import { OpenAIEmbeddings } from "@langchain/openai";

  3. Replace the OllamaEmbeddings instance with an OpenAIEmbeddings instance (a quick sanity check for this configuration follows the steps).

     // load documents into vector store
     vectorStore = new EnhancedMemoryVectorStore(
       new OpenAIEmbeddings({
         apiKey: "YOUR-API-KEY", // in Node.js, defaults to process.env.OPENAI_API_KEY
         batchSize: 512, // default is 512; max is 2048
         model: "text-embedding-3-large",
       }),
     );

  4. Rebuild the application: npm run build.
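If it's helpful, here's a minimal standalone sketch (not part of Lumos itself) for sanity-checking the key and model before rebuilding. embedQuery comes from LangChain's Embeddings interface; the API key is a placeholder.

    import { OpenAIEmbeddings } from "@langchain/openai";

    // Quick check that the API key and model are valid: embed a test
    // string and inspect the vector length.
    const main = async () => {
      const embeddings = new OpenAIEmbeddings({
        apiKey: "YOUR-API-KEY", // placeholder
        model: "text-embedding-3-large",
      });
      const vector = await embeddings.embedQuery("hello world");
      console.log(vector.length); // should be 3072 for text-embedding-3-large
    };
    main();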

That's it! Let me know if that helps.

arhaang13 commented 6 months ago

Thank you so much, this helped me a lot!

In addition, I would like to ask whether there is a way to integrate and call the OpenAI API instead of using Ollama models with Lumos itself. Running the Ollama models on my local system makes it much slower, so I would like to run the complete RAG pipeline with the OpenAI option.

Kindly help me out with the same.

Thank you,

andrewnguonly commented 6 months ago

> if there is a way that I could integrate and call the OpenAI API

Again, you'll have to update the scripts/background.ts file.

  1. Install @langchain/openai: npm install @langchain/openai
  2. Import ChatOpenAI and OpenAI.

     import { ChatOpenAI, OpenAI } from "@langchain/openai";

  3. Replace the ChatOllama instance with a ChatOpenAI instance.

     const getChatModel = (options: LumosOptions): Runnable => {
       return new ChatOpenAI({
         apiKey: "YOUR-API-KEY",
         callbacks: [new ConsoleCallbackHandler()],
       }).bind({
         signal: controller.signal,
       });
     };

  4. Replace the Ollama instance with an OpenAI instance (a standalone sketch of this classifier pattern follows the steps).

     const classifyPrompt = async (
       options: LumosOptions,
       type: string,
       originalPrompt: string,
       classificationPrompt: string,
       prefixTrigger?: string,
     ): Promise<boolean> => {
       ...

       // otherwise, attempt to classify prompt
       const openai = new OpenAI({
         apiKey: "YOUR-API-KEY",
         temperature: 0,
         stop: [".", ","],
       }).bind({
         signal: controller.signal,
       });

       ...
     };

  5. Rebuild the application: npm run build.
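For reference, here's a minimal standalone sketch of the same classifier pattern outside of Lumos. The helper name and prompt wording are made up for illustration; invoke is the standard LangChain call on a completion-style model.

    import { OpenAI } from "@langchain/openai";

    // Hypothetical helper mirroring the classification call above:
    // temperature 0 keeps the output deterministic, and the stop tokens
    // cut the completion off after the first word.
    const classify = async (prompt: string): Promise<boolean> => {
      const openai = new OpenAI({
        apiKey: "YOUR-API-KEY", // placeholder
        temperature: 0,
        stop: [".", ","],
      });
      const answer = await openai.invoke(prompt);
      return answer.trim().toLowerCase().includes("yes");
    };

    // usage: await classify("Answer yes or no: is this prompt a question?");
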
andrewnguonly commented 6 months ago

Closing this issue for now. @arhaang13, let me know if you need more help.