LLocalSearch is a search aggregator that runs completely locally, using LLM agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
Is your feature request related to a problem? Please describe.
People run untested models and report their failures. It's great that people test different models, but this project can only support a fairly limited set of small models, at least until users can change more of the internal workings through the web interface.
Describe the solution you'd like
Group tested/supported models separately from the other models, or highlight them in some other way, as sketched below.
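A minimal sketch of how the model picker could split the available models into tested and untested groups, assuming a hypothetical `TESTED_MODELS` allowlist and a plain list of model names coming from the backend. The names and structure here are illustrative, not part of the current codebase:

```typescript
// Hypothetical allowlist of models the maintainers have verified to work
// with the agent chain; everything else is shown as "untested".
const TESTED_MODELS = new Set(["mistral:7b", "llama3:8b"]);

interface ModelGroup {
  tested: string[];
  untested: string[];
}

// Split the raw model list (e.g. the names reported by the local model
// server) into two groups so the web interface can render them under
// separate headings or badge the untested ones.
function groupModels(available: string[]): ModelGroup {
  const groups: ModelGroup = { tested: [], untested: [] };
  for (const name of available) {
    (TESTED_MODELS.has(name) ? groups.tested : groups.untested).push(name);
  }
  return groups;
}

// Example usage with an illustrative list of installed models:
const { tested, untested } = groupModels(["mistral:7b", "phi3:mini", "llama3:8b"]);
console.log("Tested:", tested);     // ["mistral:7b", "llama3:8b"]
console.log("Untested:", untested); // ["phi3:mini"]
```

The same grouping could also be done with a simple badge or warning label per model instead of separate sections; the point is only that the allowlist lives in one place so it is easy to update as more models get tested.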