danny-avila opened 1 month ago
Regarding the modular system, it would be nice to be able to put all assistant tool logic into a single folder, accompanied by a JSON manifest. This would facilitate the creation of a LibreChat agents store, where each agent/assistant, such as WeatherAssistant/, can be neatly organized and managed.
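To make the folder-plus-manifest idea concrete, here is a minimal sketch of what such a manifest could describe. The field names and shape are purely illustrative assumptions, not LibreChat's actual format:

```typescript
// Hypothetical manifest schema for a self-contained agent folder.
// All field names here are illustrative, not LibreChat's real format.
interface AgentManifest {
  name: string;        // display name shown in an "agents store"
  description: string; // short summary for browsing/searching
  entrypoint: string;  // relative path to the tool logic module
  tools: string[];     // tool identifiers exported by the entrypoint
}

// Example manifest for the WeatherAssistant/ folder mentioned above.
const weatherAssistant: AgentManifest = {
  name: "WeatherAssistant",
  description: "Answers weather questions using a forecast API",
  entrypoint: "./index.js",
  tools: ["get_current_weather", "get_forecast"],
};
```

A store could then index folders by reading only the manifest, without loading any tool code.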
What I want to say is not agent, but prompt: could prompt management take a cue from https://github.com/lobehub/lobe-chat? Lobe does this better and has built a lot of useful prompts. Also, LibreChat's prompt seems to be sent via the chat message rather than as a system prompt; I hope system prompts will be supported.
I would greatly prefer a way to use tools that is not tied to the assistant concept. Most LLMs today support tools as part of the interface, so I would prefer the baseline tool support to be in the preset. I really don't need more than a way to provide a JSON manifest for each tool in preset and then to be able to implement the callback myself - either by supplying a url to call or the actual server-side implementation.
Assistants are fine, but I think they're an additional abstraction on top and one of the things I love about LibreChat is that it doesn't force me to use abstractions beyond those in the baseline LLM API contracts.
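As a sketch of what preset-level tools could look like: the tool declaration below follows the common OpenAI-style "function" tool format, while the surrounding preset shape (the `callbackUrl`/`handler` split) is a hypothetical illustration of the two implementation options described above:

```typescript
// A user-supplied tool implementation: receives parsed arguments,
// returns a result to feed back to the model.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown> | unknown;

// Hypothetical preset-level tool entry: an OpenAI-style function
// declaration plus either a URL to call or a server-side handler.
interface PresetTool {
  definition: {
    type: "function";
    function: { name: string; description: string; parameters: object };
  };
  callbackUrl?: string; // option A: POST the arguments to this URL
  handler?: ToolHandler; // option B: server-side implementation
}

const getWeather: PresetTool = {
  definition: {
    type: "function",
    function: {
      name: "get_current_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
  // Stub server-side implementation for illustration only.
  handler: ({ city }) => ({ city, tempC: 21 }),
};
```

The `definition` object is exactly what would be passed through to the LLM API's `tools` parameter, keeping the preset close to the baseline API contract.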
Thanks for your comment! This will be a successor to the plugins endpoint, which does not have that additional abstraction you mention. I want to allow “tools” to be toggled on and off per “run”, on the fly, as before.
Though it’s worth mentioning, this is in essence an agent without an association to the agent data structure, and Plugins also functions like this under the hood. As soon as the LLM has “agency” over tools, it crosses into that territory.
LibreChat has let users work with system prompts via presets since day one, right after the OpenAI API launched in March or April 2023. With our upcoming update, we want to make this feature even more noticeable and easier to use. Also, think of "Agent" as a more straightforward way to describe this idea, since it basically uses "instructions" that act like a system prompt.
Is there planned integration of agents with locally hosted LLMs on platforms such as Ollama?
Yes! All Ollama models that support tool calling.
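For readers curious what tool calling against a local model involves: below is a sketch of a request body in the shape Ollama's `/api/chat` endpoint accepts, with an OpenAI-style function tool attached. The model name and tool are examples, and whether a given model honors the `tools` field depends on the model itself:

```typescript
// Example request body for tool calling against a locally hosted model
// via Ollama's /api/chat endpoint. Model name and tool are illustrative.
const ollamaRequest = {
  model: "llama3.1",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
  stream: false,
};
// The body would be POSTed as JSON to http://localhost:11434/api/chat;
// a capable model replies with a tool_calls entry instead of plain text.
```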
What features would you like to see added?
Open-source alternative to Assistants API, as a successor to the "Plugins" endpoint, with support for Mistral, AWS Bedrock, Anthropic, OpenAI, Azure OpenAI Services, and more.
More details
Tweet announcing this: https://twitter.com/LibreChatAI/status/1821195627830599895
Which components are impacted by your request?
Frontend/backend