JohnZolton / snorkle

100% Local Document deep search with LLMs

Snorkle.local and Ollama #3

Open logichub opened 2 months ago

logichub commented 2 months ago

> Flexible Backend: While text-gen-webui is the default, Patense.local can work with any backend LLM server.

Is it possible to use Snorkle.local with Ollama? Can you provide some documentation or guide on this? Thank you.

JohnZolton commented 2 months ago

In `src/server/api/routers/jobs.ts`, change

```ts
const webUiEndpoint = "http://127.0.0.1:5000/v1/chat/completions";
```

to

```ts
const webUiEndpoint = "http://localhost:11434/api/generate";
```

And I think you need to include the specific model.

change:

```ts
const response = await fetch(webUiEndpoint, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    messages: [
      {
        role: "user",
        content: message,
      },
    ],
  }),
});
```

to

```ts
const response = await fetch(webUiEndpoint, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama3",
    messages: [
      {
        role: "user",
        content: message,
      },
    ],
  }),
});
```
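One detail not covered above: Ollama streams its response token by token by default, so a client that expects a single JSON reply should also send `"stream": false`. A minimal sketch of the adjusted body, with `buildRequestBody` as a hypothetical helper name:

```typescript
// Hypothetical helper: builds the JSON body for an Ollama chat-style request.
// "stream: false" asks Ollama for one complete JSON response instead of a
// stream of newline-delimited chunks, which is what fetch().json() expects.
function buildRequestBody(model: string, message: string): string {
  return JSON.stringify({
    model,
    stream: false, // Ollama defaults to streaming; disable for a single reply
    messages: [{ role: "user", content: message }],
  });
}
```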
logichub commented 2 months ago

Thanks for the above code. It partially worked.

I can successfully upload documents, but the search feature does not work as expected. When attempting to search for content, I received the following error message in the terminal:

❌ tRPC failed on job.deepSearch: fetch failed
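One likely cause, though the thread does not confirm it: Ollama's `/api/generate` endpoint expects a flat `prompt` string, while a `messages` array is what its `/api/chat` endpoint accepts. A sketch of a request aimed at `/api/chat` instead, with `buildOllamaChatRequest` as a hypothetical helper:

```typescript
// Hypothetical helper: builds the url and fetch() init for Ollama's /api/chat
// endpoint, which accepts an OpenAI-style "messages" array.
// (/api/generate expects a "prompt" string instead of "messages".)
function buildOllamaChatRequest(model: string, message: string) {
  return {
    url: "http://localhost:11434/api/chat",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        stream: false, // return one JSON object rather than a stream
        messages: [{ role: "user", content: message }],
      }),
    },
  };
}
```

The returned object can be passed straight to `fetch(req.url, req.init)` in `jobs.ts`; whether this resolves the `fetch failed` error also depends on Ollama actually listening on port 11434.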