LLocalSearch is a wrapper around locally running Large Language Models (like ChatGPT, but a lot smaller and less "smart") which allows them to choose from a set of tools. These tools let them search the internet for current information about your question. The process is recursive: the running LLM can freely choose to use tools, even multiple times, based on the information it gets from you and from earlier tool calls.
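Under the hood this is essentially a ReAct-style agent loop. The sketch below is illustrative rather than lifted from the codebase; it assumes langchaingo's agent executor and uses a hypothetical stubbed `WebSearch` tool in place of the project's real search tooling:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/tools"
)

// WebSearch is a hypothetical stand-in for the project's real search tool.
type WebSearch struct{}

func (w WebSearch) Name() string        { return "web-search" }
func (w WebSearch) Description() string { return "Searches the web. Input: a search query." }
func (w WebSearch) Call(ctx context.Context, query string) (string, error) {
	// A real implementation would query a metasearch engine (e.g. SearXNG).
	return "stubbed search results for: " + query, nil
}

func main() {
	llm, err := ollama.New(ollama.WithModel("llama3"))
	if err != nil {
		log.Fatal(err)
	}

	// The executor feeds every tool result back into the prompt, so the LLM
	// can decide to call tools again or to stop; this is the recursion
	// described above.
	executor, err := agents.Initialize(
		llm,
		[]tools.Tool{WebSearch{}},
		agents.ZeroShotReactDescription,
	)
	if err != nil {
		log.Fatal(err)
	}

	answer, err := chains.Run(context.Background(), executor, "Who won the 2024 Super Bowl?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(answer)
}
```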
The long-term plan, which OpenAI is selling to big media houses:
> Additionally, members of the program receive priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments.
If you dislike the idea of being manipulated by the highest bidder, you might want to try some less discriminatory alternatives, like this project.
The langchain library I'm using (langchaingo) does not respect the Llama 3 stop words, which results in Llama 3 starting to hallucinate at the end of a turn. I have a working patch (check out the `experiments` branch), but since I'm unsure whether my way is the right way to solve this, I'm still waiting for a response from the langchaingo team.
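In the meantime, the stop tokens can be passed explicitly per call. This is a minimal sketch, not the actual patch from the `experiments` branch; it assumes langchaingo's `llms.WithStopWords` call option and Llama 3's `<|eot_id|>` end-of-turn token:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Assumption: Ollama is reachable on its default port with llama3 pulled.
	llm, err := ollama.New(ollama.WithModel("llama3"))
	if err != nil {
		log.Fatal(err)
	}

	// Pass Llama 3's end-of-turn token as an explicit stop word, since the
	// library does not inject it on its own (the bug described above).
	resp, err := llms.GenerateFromSinglePrompt(context.Background(), llm,
		"Name one planet.",
		llms.WithStopWords([]string{"<|eot_id|>"}),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```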
- An interface overhaul, allowing for more flexible panels and more efficient use of space, inspired by the current layout of Obsidian. This still needs a lot of work, like refactoring many of the internal data structures to allow for better and more flexible ways to extend the functionality in the future, without having to rewrite the whole data transmission and interface part again.
- Groundwork for private information inside the RAG chain, like uploading your own documents, or connecting LLocalSearch to services like Google Drive or Confluence.
- Per-user context: provide the main agent chain with information about the user, like preferences, and give each user their own vector DB namespace for persistent information. I'm not sure if there is a right way to implement this; see the sketch after this list.
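One possible shape for the per-user namespace idea, sketched with langchaingo's Chroma vector store; the user ID, model name, and URLs below are placeholder assumptions, not values from the project:

```go
package main

import (
	"context"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores"
	"github.com/tmc/langchaingo/vectorstores/chroma"
)

func main() {
	ctx := context.Background()

	// Assumption: an Ollama instance with an embedding-capable model.
	llm, err := ollama.New(ollama.WithModel("nomic-embed-text"))
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical per-user isolation: derive the Chroma namespace from the
	// user ID, so one user's documents never leak into another's searches.
	userID := "user-42"
	store, err := chroma.New(
		chroma.WithChromaURL("http://localhost:8000"),
		chroma.WithEmbedder(embedder),
		chroma.WithNameSpace(userID),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Store a piece of persistent user information in the user's namespace.
	_, err = store.AddDocuments(ctx, []schema.Document{
		{PageContent: "The user prefers metric units."},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Later, retrieve it scoped to the same namespace.
	docs, err := store.SimilaritySearch(ctx, "unit preferences", 1,
		vectorstores.WithNameSpace(userID))
	if err != nil {
		log.Fatal(err)
	}
	log.Println(docs)
}
```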
```bash
git clone git@github.com:nilsherzig/LLocalSearch.git
cd LLocalSearch
```
Create and edit a `.env` file if you need to change some of the default settings. This is typically only needed if you have Ollama running on a different device, or if you want to build a more complex setup (for more than just personal use, for example). Please read the Ollama Setup Guide if you struggle to get the Ollama connection running.

```bash
touch .env
code .env # open the file with VSCode
nvim .env # open the file with Neovim
```
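As an illustration, overriding the Ollama address in `.env` might look like the snippet below. `OLLAMA_HOST` is the variable Ollama itself uses, but treat the name as an assumption here and check the compose file for the exact variables LLocalSearch reads:

```bash
# Assumed variable name; verify against the compose file.
# Point the backend at an Ollama instance running on another machine.
OLLAMA_HOST=http://192.168.0.50:11434
```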
```bash
docker-compose up -d
```
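To check that everything came up, you can list the services and follow their logs; these are standard docker-compose commands, independent of this project:

```bash
docker-compose ps       # list running services and their state
docker-compose logs -f  # follow the logs of all services
```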