Closed: ntelo007 closed this issue 7 months ago.
Hello @ntelo007! I'm Dosu, your friendly neighborhood bot here to help you navigate through the bits and bytes while we wait for a human maintainer. Whether it's solving bugs, answering questions, or helping you become a contributor, I'm here to assist. I'm taking a look at your issue regarding the integration with local LLMs and the OpenAI Assistants API, and I'll provide you with a full answer shortly.
@ntelo007 you can just use a regular ReAct agent?
https://docs.llamaindex.ai/en/stable/module_guides/deploying/agents/usage_pattern/
https://docs.llamaindex.ai/en/stable/module_guides/deploying/agents/modules/#react-agent
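For example, a minimal ReAct agent following the usage pattern in those docs looks roughly like this; the tool, model, and query are just placeholders, and exact import paths can vary between llama-index versions:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# An illustrative tool; the agent decides when to call it based on the query.
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Any LlamaIndex-compatible LLM can back the agent; OpenAI is used here only as an example.
llm = OpenAI(model="gpt-4")

agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
response = agent.chat("What is 21 multiplied by 2?")
print(response)
```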
To be clear, I can't run commands or open external links directly, so I can't synthesize the content behind those documentation URLs. I can, however, offer guidance based on the information in this conversation.
From the conversation, it seems you're interested in whether the repository supports the OpenAI Assistants API for local Large Language Models (LLMs) and whether there is functionality for creating an agent that receives written instructions.
The maintainer is suggesting an alternative approach: using a regular ReAct agent. This doesn't directly answer your question about OpenAI Assistants API support, but it offers a pathway to similar functionality. The links they provided point to documentation on deploying agents and, specifically, on the ReAct agent, which could be useful for creating an agent that receives and acts on written instructions.
If your primary interest lies in the OpenAI Assistants API itself, and specifically in how assistant threads are managed, note that the Assistants API requires an OpenAI API key and runs against OpenAI's hosted service, so OpenAI's official documentation or community forums are the better place for targeted advice on it. The suggestion to use a ReAct agent indicates there are ways to achieve your goals within the repository's existing framework, albeit with a different approach than you initially envisioned.
For a more detailed and tailored solution, I recommend reading the documentation links above and considering how a ReAct agent could meet your needs. Engaging with the community, or reaching out to OpenAI directly for guidance on their Assistants API, could provide further insight.
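As a concrete illustration of that pathway, here is a minimal sketch of a ReAct agent backed by a local model. It assumes the optional llama-index-llms-ollama integration is installed and an Ollama server is running locally; the model name, tool, and timeout are placeholders:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama  # requires: pip install llama-index-llms-ollama

def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

# Point the agent at a local model served by Ollama instead of the OpenAI API,
# so no OpenAI key (and no Assistants thread management) is involved.
local_llm = Ollama(model="llama2", request_timeout=120.0)

agent = ReActAgent.from_tools([add_tool], llm=local_llm, verbose=True)
print(agent.chat("What is 19 + 23?"))
```

Written instructions for the agent can usually be supplied through the system prompt of the underlying LLM or the agent's prompt configuration; the exact parameter depends on your llama-index version, so treat the usage-pattern documentation linked above as the authoritative reference.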
If you have more specific questions or need further clarification, feel free to ask!
Question Validation
Question
Hey people.
1) Do you support the OpenAI Assistants API for local LLMs? I think thread creation, which is required for running an assistant and retrieving its responses, is not handled correctly unless you are signed in with your OpenAI API key (see the sketch below for the flow I mean).
2) Do you support any similar functionality where you can easily create an agent that receives written instructions describing what to do?
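For reference, the thread-based flow I mean looks roughly like this with the official OpenAI Python SDK; it only works against OpenAI's hosted service with a valid API key, and the assistant name, instructions, and model below are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# Create an assistant from written instructions (placeholder values).
assistant = client.beta.assistants.create(
    name="Docs helper",
    instructions="Answer questions about my project documentation.",
    model="gpt-4-turbo",
)

# A thread holds the conversation; a run executes the assistant against it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does the deploy script do?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.status)
```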
Thanks in advance.