Closed LUK3ARK closed 9 months ago
update: from testing, this seems to only occur if there is more than one user proxy in a group. When I have only one user proxy in a groupchat, function calling works correctly.
@LUK3ARK I tried the sample notebook, it didn't raise any issue. https://github.com/microsoft/autogen/blob/fix_1440/notebook/agentchat_groupchat_RAG.ipynb
Could you share your notebook? Thanks.
When I run the agentchat_groupchat_RAG.ipynb example, it suggests calling the retrieve function and then fails with this issue.
Here is the script I run:
```python
import autogen
import chromadb
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-3.5-turbo", "gpt-35-turbo", "gpt-3.5-turbo-0613", "gpt-4", "gpt4", "gpt-4-32k"],
    },
)

if __name__ == "__main__":
    llm_config = {
        "timeout": 60,
        "cache_seed": 42,
        "config_list": config_list,
        "temperature": 0,
    }

    # autogen.ChatCompletion.start_logging()
    def termination_msg(x):
        return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()

    boss = autogen.UserProxyAgent(
        name="Boss",
        is_termination_msg=termination_msg,
        human_input_mode="NEVER",
        system_message="The boss who ask questions and give tasks.",
        code_execution_config=False,  # we don't want to execute code in this case.
        default_auto_reply="Reply `TERMINATE` if the task is done.",
    )

    boss_aid = RetrieveUserProxyAgent(
        name="Boss_Assistant",
        is_termination_msg=termination_msg,
        system_message="Assistant who has extra content retrieval power for solving difficult problems.",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=3,
        retrieve_config={
            "task": "code",
            "docs_path": "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md",
            "chunk_token_size": 1000,
            "model": config_list[0]["model"],
            "client": chromadb.PersistentClient(path="/tmp/chromadb"),
            "collection_name": "groupchat",
            "get_or_create": True,
        },
        code_execution_config=False,  # we don't want to execute code in this case.
    )

    coder = autogen.AssistantAgent(
        name="Senior_Python_Engineer",
        is_termination_msg=termination_msg,
        system_message="You are a senior python engineer. Reply `TERMINATE` in the end when everything is done.",
        llm_config=llm_config,
    )

    pm = autogen.AssistantAgent(
        name="Product_Manager",
        is_termination_msg=termination_msg,
        system_message="You are a product manager. Reply `TERMINATE` in the end when everything is done.",
        llm_config=llm_config,
    )

    reviewer = autogen.AssistantAgent(
        name="Code_Reviewer",
        is_termination_msg=termination_msg,
        system_message="You are a code reviewer. Reply `TERMINATE` in the end when everything is done.",
        llm_config=llm_config,
    )

    PROBLEM = "How to use spark for parallel training in FLAML? Give me sample code."

    def _reset_agents():
        boss.reset()
        boss_aid.reset()
        coder.reset()
        pm.reset()
        reviewer.reset()

    def rag_chat():
        _reset_agents()
        groupchat = autogen.GroupChat(
            agents=[boss_aid, coder, pm, reviewer], messages=[], max_round=12, speaker_selection_method="round_robin"
        )
        manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

        # Start chatting with boss_aid as this is the user proxy agent.
        boss_aid.initiate_chat(
            manager,
            problem=PROBLEM,
            n_results=3,
        )

    def norag_chat():
        _reset_agents()
        groupchat = autogen.GroupChat(
            agents=[boss, coder, pm, reviewer],
            messages=[],
            max_round=12,
            speaker_selection_method="auto",
            allow_repeat_speaker=False,
        )
        manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

        # Start chatting with the boss as this is the user proxy agent.
        boss.initiate_chat(
            manager,
            message=PROBLEM,
        )

    def call_rag_chat():
        _reset_agents()

        # In this case, we will have multiple user proxy agents and we don't initiate the chat
        # with RAG user proxy agent.
        # In order to use RAG user proxy agent, we need to wrap RAG agents in a function and call
        # it from other agents.
        def retrieve_content(message, n_results=3):
            boss_aid.n_results = n_results  # Set the number of results to be retrieved.
            # Check if we need to update the context.
            update_context_case1, update_context_case2 = boss_aid._check_update_context(message)
            if (update_context_case1 or update_context_case2) and boss_aid.update_context:
                boss_aid.problem = message if not hasattr(boss_aid, "problem") else boss_aid.problem
                _, ret_msg = boss_aid._generate_retrieve_user_reply(message)
            else:
                ret_msg = boss_aid.generate_init_message(message, n_results=n_results)
            return ret_msg if ret_msg else message

        boss_aid.human_input_mode = "NEVER"  # Disable human input for boss_aid since it only retrieves content.

        llm_config = {
            "functions": [
                {
                    "name": "retrieve_content",
                    "description": "retrieve content for code generation and question answering.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "message": {
                                "type": "string",
                                "description": "Refined message which keeps the original meaning and can be used to retrieve content for code generation and question answering.",
                            }
                        },
                        "required": ["message"],
                    },
                },
            ],
            "config_list": config_list,
            "timeout": 60,
            "cache_seed": 42,
        }

        for agent in [coder, pm, reviewer]:
            # update llm_config for assistant agents.
            agent.llm_config.update(llm_config)

        for agent in [boss, coder, pm, reviewer]:
            # register functions for all agents.
            agent.register_function(
                function_map={
                    "retrieve_content": retrieve_content,
                }
            )

        groupchat = autogen.GroupChat(
            agents=[boss, coder, pm, reviewer],
            messages=[],
            max_round=12,
            speaker_selection_method="random",
            allow_repeat_speaker=False,
        )

        manager_llm_config = llm_config.copy()
        manager_llm_config.pop("functions")
        manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=manager_llm_config)

        # Start chatting with the boss as this is the user proxy agent.
        boss.initiate_chat(
            manager,
            message=PROBLEM,
        )

    call_rag_chat()
```
Here is the execution result:
- I have tried with gpt-3.5-turbo-0613 and gpt-4.
- I have tried with export AUTOGEN_USE_DOCKER=0 and export AUTOGEN_USE_DOCKER=1, and with both, function calling no longer works. The issue before was that a function call was being suggested but then could not be acted upon because tool_calls was None.
Now, however, having recreated the script, I am not able to reproduce even the initial function call.
I have been able to get my use case working by going a different route, so this is no longer a burning issue, but I am still unable to just copy the setup and run it.
@LUK3ARK I had the same problem, how did you fix it?
I rolled back the version to 0.2.3 and the issue stopped happening
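If rolling back also works in your environment, the pin can be recorded as a requirements.txt constraint (version taken from the comment above):

```
pyautogen==0.2.3
```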
How can I solve the "file not found error" problem? I'm using the notebook code as well, but I don't know how to deal with it.
@thinkall could you take a note of this thread and inform the users in your RAG refactor roadmap?
> How can I solve the problem with "file not found error". I'm using the notebook code as well. But I don't know how to deal with it.
You need to either put your config file in the folder of the notebook, or pass file_location=<directory of the file OAI_CONFIG_LIST> to the function config_list_from_json.
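For illustration, here is a minimal self-contained sketch of that setup: it writes an OAI_CONFIG_LIST file into a known directory and shows (as a comment, so the sketch runs without autogen installed) how file_location would point at that directory. The directory and credentials are placeholders, not real values.

```python
import json
import os
import tempfile

# Write an OAI_CONFIG_LIST file into a directory of our choosing.
config_dir = tempfile.mkdtemp()
config = [{"model": "gpt-4", "api_key": "sk-..."}]  # placeholder credentials
with open(os.path.join(config_dir, "OAI_CONFIG_LIST"), "w") as f:
    json.dump(config, f)

# The autogen call would then be:
# config_list = autogen.config_list_from_json(
#     env_or_file="OAI_CONFIG_LIST",
#     file_location=config_dir,  # directory that contains OAI_CONFIG_LIST
# )
print(os.listdir(config_dir))  # → ['OAI_CONFIG_LIST']
```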
Describe the bug
I picked out the RAG groupchat example from the notebook and made several variations of it. I have been able to get it working if I chat only with the retrieve agent, but when I initiate a conversation with the group manager, it suggests a function call and then fails.
When I run the agentchat_groupchat_RAG.ipynb example, it suggests calling the retrieve function and then fails with this error:

```
openai.BadRequestError: Error code: 400 - {'error': {'message': "None is not of type 'array' - 'messages.2.tool_calls'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
I get this error with MULTIPLE different functions.
I have been debugging this for a while, and my current conclusion is that this bug comes from the group chat manager itself. I tried hardcoding an empty list instead of None, but then it says the message is still too short.
I know that OpenAI is deprecating function calling and wanted to know if this is expected behaviour or if I am missing something vital.
Again, this only seems to occur when a manager is in the middle of the conversation; I have seen other agents call functions within a chat.
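As a rough sketch of the kind of workaround I was attempting (the helper name is mine, not AutoGen's API): before the accumulated group-chat history is sent to the OpenAI API, strip any "tool_calls" field whose value is None, since the API requires "tool_calls" to be an array when present. Note this is only an illustration of the shape of the problem, not a confirmed fix.

```python
def sanitize_messages(messages):
    """Return a copy of the message list with null tool_calls fields removed."""
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy so the original history is untouched
        if msg.get("tool_calls") is None:
            msg.pop("tool_calls", None)
        cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "How to use spark for parallel training in FLAML?"},
    {"role": "assistant", "content": None, "tool_calls": None},  # offending message
]
print(sanitize_messages(history)[1])  # → {'role': 'assistant', 'content': None}
```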
Steps to reproduce
Copy the agentchat_groupchat_RAG.ipynb and run it
Expected Behavior
When using the call_rag_chat function, it should call the retrieve_content function correctly.
I expect to be able to reproduce the same results as the examples out of the box.
Screenshots and logs
```
lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "None is not of type 'array' - 'messages.2.tool_calls'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
Additional Information
Consistent in all versions of pyautogen down to 0.2.4; below that it kind of works, but other issues start tangling.