LLocalSearch is a completely locally running search aggregator using LLM agents. The user asks a question, and the system uses a chain of LLMs to find the answer. The user can see the agents' progress and the final answer. No OpenAI or Google API keys are needed.
Apache License 2.0
limit db result size to not "overflow" context #91
Is your feature request related to a problem? Please describe.
Sometimes we get more result text back than the model context can hold, especially when the context size is set to a low number. The LLM then fails to adhere to the requested answer structure, which leads to parsing errors.
Describe the solution you'd like
Limit the amount of context we use for DB / web-search results.
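One way to implement the requested limit is to cap the retrieved text at an approximate token budget before it is placed into the prompt. Below is a minimal sketch; the function name `truncate_to_budget` and the ~4-characters-per-token heuristic are assumptions for illustration, not part of the project's codebase.

```python
def truncate_to_budget(text: str, max_tokens: int, chars_per_token: int = 4) -> str:
    """Cap result text at an approximate token budget.

    Hypothetical helper: uses the rough heuristic of ~4 characters per
    token, so retrieved DB / web-search text cannot consume the whole
    context window. A real implementation would count tokens with the
    model's own tokenizer instead of estimating by character length.
    """
    max_chars = max_tokens * chars_per_token
    if len(text) <= max_chars:
        return text
    # Cut at the budget, then drop a possibly partial trailing word.
    return text[:max_chars].rsplit(" ", 1)[0] + " …"
```

With a budget like this applied per result, the remaining context stays available for the prompt scaffolding and the model's structured answer, so low `num_ctx` settings are less likely to break the expected output format.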