Closed. DementedWeasel1971 closed this issue 6 months ago.
@afourney
Looks like an issue related to support for assistant API. @gagb @sidhujag
@IANTHEREAL fyi
The problem is caused by the llm_config passed in manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config). You can try removing assistant_id from the chat manager's configuration.
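A minimal sketch of the suggested fix, assuming an llm_config shaped like the ones AutoGen's examples use. The model entry and the assistant id "asst_abc123" are placeholders, not values from this thread: the GPT assistant keeps its assistant_id, while the GroupChatManager receives a copy of the config with that key stripped out.

```python
# Config used for the GPTAssistantAgent, which DOES need the assistant_id
# to bind to the pre-configured assistant on the OpenAI platform.
# (Placeholder values, for illustration only.)
assistant_llm_config = {
    "config_list": [{"model": "gpt-4"}],
    "assistant_id": "asst_abc123",
}

# Build the manager's config by copying everything EXCEPT assistant_id,
# which the GroupChatManager cannot handle.
manager_llm_config = {
    k: v for k, v in assistant_llm_config.items() if k != "assistant_id"
}

print(manager_llm_config)  # {'config_list': [{'model': 'gpt-4'}]}
```

The original assistant_llm_config is left untouched, so the existing agent on the OpenAI platform can still be referenced.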
If I follow your advice and remove the assistant_id, how do I activate an already existing agent in my OpenAI account? The assistant_id is the reference to an already configured agent that I have on OpenAI. It works if I have a single chat between agents: I have full use of the code interpreter, as well as any knowledge or functions that I have on the OpenAI platform. So I can configure and tweak an agent once, feed it specific knowledge once, and then call it.
This works like a charm, except for the group chat. If it worked as well in the group chat as it does in the single chat, it would open up huge potential: not only would I have custom agents, I would have remote compute (a code interpreter for each agent) without needing to configure this at run time.

The issue is specific to group chat; it works fantastically everywhere else. But I do not want to use networkx to orchestrate engagement between agents, and group chat in AutoGen works brilliantly for agents created locally. You might consider a code change in the group chat or llm_config to add the agent names, as I suspect that is what you are using.
If this works, or I find a workaround, it would be a huge thing for me: group chats between existing super agents.
If you later add a feature to manage thread allocation, AutoGen would be on its way to enabling a "wisdom enabler" (but I do not want to digress). I would want existing agents enabled in a group chat while they have access to all their knowledge, functions, and preset instructions, the way it works in a 1:1 chat with an existing agent, but in the group chat.
If I remove the assistant_id, I remove the ability to reference an existing agent. Or is there another way to reference it? If so, I will gladly test.
I'm a bit confused: have you encountered a new error? The previous issue was in the chat manager's function, not in the GPT assistant agent. You just need to remove the assistant_id from the chat manager's llm_config; the rest of the GPT assistant agent's configuration, including the assistant_id, can remain unchanged. The chat manager doesn't need an assistant_id, right? @DementedWeasel1971
The confusion might be on my side. @IANTHEREAL, when you said to remove the assistant_id from the chat manager's llm_config, I took it to imply that it should not be there to start with. I will test and revert.
I hit this and found that popping the assistant_id in the GPT assistant fixes it.
Please share code; I would love to find the workaround. If I can get the existing agents to work in a group chat, wow, then each one's specialisation could in theory be used to create better-quality output, or even output in the context of a company's existing source code, which I could have uploaded as knowledge.
@DementedWeasel1971 you need to use a different llm_config for the group chat manager and the GPT assistant. The former does not need an assistant_id and will complain if one is present; the latter can use one. Hope this helps!
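The two-config pattern described above could be wired up roughly as follows. This is a sketch under stated assumptions: the agent name, model entry, and assistant id are placeholders, and the commented-out AutoGen calls assume the pyautogen package with its GPTAssistantAgent contrib class, plus valid OpenAI credentials, none of which are shown in this thread.

```python
# Assumed OAI_CONFIG_LIST-style entry (placeholder model name).
config_list = [{"model": "gpt-4"}]

# Config for the GPT assistant agent: it keeps the assistant_id so it
# binds to the pre-configured assistant on the OpenAI platform.
# "asst_abc123" is a placeholder, not a real assistant id.
assistant_llm_config = {
    "config_list": config_list,
    "assistant_id": "asst_abc123",
}

# Config for the group chat manager: same config_list, but NO assistant_id.
manager_llm_config = {"config_list": config_list}

# With pyautogen installed and credentials configured, the wiring would
# look something like this (left commented since it calls the OpenAI API):
#
# import autogen
# from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent
#
# assistant = GPTAssistantAgent(name="existing_agent",
#                               llm_config=assistant_llm_config)
# groupchat = autogen.GroupChat(agents=[user_proxy, assistant], messages=[])
# manager = autogen.GroupChatManager(groupchat=groupchat,
#                                    llm_config=manager_llm_config)

print("assistant_id" in manager_llm_config)  # False
```

The key point is that the two dicts are separate objects, so the existing agent stays referenced while the manager never sees the assistant_id key.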
Describe the bug
When assistant already exists and is instantiated using code such as:
Then in GroupChat the following error is shown:
This only happens when I want to re-use agents that already exist and are called via the following example code:
It seems to be specific to when the existing agent is referenced.
Steps to reproduce
Expected Behavior
The chat should have started, as it currently does when the group chat uses agents created as part of the process.
Screenshots and logs
No response
Additional Information
Win 10 Pro, current version, Python 9.10