llm-tools / embedJs

A NodeJS RAG framework to easily work with LLMs and embeddings
https://llm-tools.mintlify.app/get-started/introduction
Apache License 2.0
328 stars · 39 forks

local path loader issue #160

Open RaghvindYadav opened 1 week ago

RaghvindYadav commented 1 week ago

I tried the following code.

    const ragApplication = await new RAGApplicationBuilder()
        .setModel(new Ollama({ modelName: llamaVersion, baseUrl: llamaHostUrl }))
        .setEmbeddingModel(new OllamaEmbeddings({ model: 'nomic-embed-text', baseUrl: llamaHostUrl }))
        .setVectorDatabase(new HNSWDb())
        .build();

    ragApplication.addLoader(new UrlLoader({ url: "https://www.forbes.com/profile/elon-musk" }));

    const ragResponse = await ragApplication.query('What is the net worth of Elon Musk today?');

    console.log('response: ', ragResponse);

output:

response:  {
  id: '6b8b201d-a839-470e-b675-5603891934e8',
  timestamp: 2024-11-07T13:10:45.224Z,
  content: "I don't have the most up-to-date information on Elon Musk's current net worth. My knowledge cutoff is December 2023, and I may not have reflected any recent changes in his wealth. For the most accurate and current information, I recommend checking reputable financial news sources or Elon Musk's official social media accounts.",
  actor: 'AI',
  sources: [],
  tokenUse: { inputTokens: 'UNKNOWN', outputTokens: 'UNKNOWN' }
}

It doesn't work with any URL for me. Can you help me fix this?
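[Editor's note] One thing worth checking while debugging (a general Node.js pitfall, not a confirmed diagnosis of this issue): `addLoader` in the snippet above is called without `await`, so the query may run before ingestion of the page has finished, which would explain the empty `sources` array. Below is a self-contained sketch of that pattern with a stand-in async `ingest` function in place of the real loader; the names `store`, `ingest`, and `query` are illustrative and not part of the embedJs API.

```typescript
// Stand-in document store and async loader, simulating embedJs ingestion.
const store: string[] = [];

async function ingest(doc: string): Promise<void> {
  // Simulate async I/O (fetching and embedding a page).
  await new Promise((resolve) => setTimeout(resolve, 10));
  store.push(doc);
}

async function query(): Promise<number> {
  // How many documents are visible to the query right now.
  return store.length;
}

async function main() {
  ingest("page A"); // fire-and-forget: NOT awaited
  console.log(await query()); // likely 0 -- ingestion has not completed yet

  await ingest("page B"); // awaited: ingestion completes before the query
  console.log(await query()); // at least 1
}

main();
```

If this is the cause, awaiting `addLoader(...)` before calling `query(...)` should make the source show up; the debug logs requested below would confirm either way.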

adhityan commented 5 days ago

Could you run the app with debug logs enabled? That will show why the loader did not pick up the URL.

You can enable debug logs by setting the environment variable DEBUG like so:

DEBUG=embedjs:*
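For example, the variable can be set for a single run on the command line (the entry-point filename `index.js` here is an assumption; substitute your own start command):

```shell
# Run the app with embedJs debug logging enabled for this run only
# (POSIX shell syntax; "index.js" is a placeholder entry point).
DEBUG=embedjs:* node index.js
```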

adhityan commented 5 days ago

Please post back here the output of the app with debug logs enabled so we know what went wrong.