Open seanzhang-zhichen opened 4 days ago
I don't know what the question is, but I will assume it is about function calling. Perhaps the question is "Why is it not working using the ReAct prompt?"

Assuming everything else is in order, my guess is that you don't have the function `query_xxxx`. Also, don't register the functions `query_xxxx` and `query_xxxxx` under the same name.

My suggestion is: change the question/issue so others can understand what it is about.
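For illustration, here is a minimal, self-contained sketch (hypothetical names, not autogen's actual implementation) of why registering two different functions under the same tool name makes one of them disappear: a tool registry is keyed by name, so the second registration silently replaces the first.

```python
# Minimal sketch of a name-keyed tool registry. All names here are
# hypothetical stand-ins; this is not autogen's real code.
tools = {}

def register(name, func, description):
    # Overwrites any existing entry with the same name, which mirrors
    # the "only one tool visible" symptom.
    tools[name] = {"func": func, "description": description}

def query_a(q):  # stand-in for query_xxxx
    return f"analysis: {q}"

def query_b(q):  # stand-in for query_xxxxx
    return f"answer: {q}"

register("query_xxxx", query_a, "analysis assistant")
register("query_xxxx", query_b, "Q&A assistant")  # same name: clobbers query_a

print(len(tools))  # → 1: only one tool survives
print(tools["query_xxxx"]["func"] is query_b)  # → True
```

Registering the second function under its own name (e.g. `query_xxxxx`) keeps both entries visible.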
I forgot to push my register code. I am sure that I have registered the tool, but I cannot see the tool in the system prompt.
Back to my previous suggestion: please change the issue to be more informative.

We're here to help each other, but please understand we can't do anything without knowing the details first. For example, in this case we don't know what model you are using (OpenAI, non-OpenAI?), in what way you are inspecting the system prompt, etc.
You can try to use autogen's logging feature to inspect the messages. Please see this for more details and examples: https://microsoft.github.io/autogen/docs/notebooks/agentchat_logging/
For function calling, please see this example: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_currency_calculator.ipynb
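As a rough, stdlib-only illustration of the idea behind such registration (hypothetical helper `tool_schema`, not autogen's actual code): the tool schema the model sees is derived from the function's signature and its `Annotated` parameter descriptions, which is why each tool needs its own distinct name and description.

```python
# Sketch of deriving a tool schema from a function signature.
# `tool_schema` is a hypothetical helper, not part of autogen.
from typing import Annotated, get_type_hints, get_args, get_origin

def tool_schema(func, description):
    hints = get_type_hints(func, include_extras=True)
    params = {}
    for pname, hint in hints.items():
        if pname == "return":
            continue
        if get_origin(hint) is Annotated:
            base, desc = get_args(hint)[0], get_args(hint)[1]
        else:
            base, desc = hint, ""
        params[pname] = {"type": base.__name__, "description": desc}
    return {"name": func.__name__, "description": description, "parameters": params}

def currency_calculator(
    amount: Annotated[float, "Amount to convert"],
    currency: Annotated[str, "Target currency code"],
) -> str:
    return f"{amount} {currency}"

schema = tool_schema(currency_calculator, "Currency conversion tool")
print(schema["name"])                  # → currency_calculator
print(schema["parameters"]["amount"])  # → {'type': 'float', 'description': 'Amount to convert'}
```

If two functions were registered under one name, only one such schema would reach the model, so only one tool shows up.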
```python
import logging
from typing import Annotated

import autogen
from autogen import ConversableAgent, config_list_from_json, register_function

from virus_agent import VirusAgent
from text2sql_agent import GoldenEyesAgent

config_list = config_list_from_json(env_or_file="./autogent/config.json")

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

ReAct_prompt = """
Answer the following questions as best you can. You have access to tools provided.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take
Action Input: the input to the action
Observation: the result of the action
... (this process can repeat multiple times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!
Question: {input}
"""

def react_prompt_message(sender, recipient, context):
    return ReAct_prompt.format(input=context["question"])

# Let's first define the assistant agent that suggests tool calls.
assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful assistant.",
    llm_config=llm_config,
)

# The user proxy agent is used for interacting with the assistant agent
# and executes tool calls.
user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None
    and "TERMINATE" in msg["content"],
    human_input_mode="ALWAYS",
)

# Register the calculator function to the two agents.
register_function(
    query_xxxx,
    caller=assistant,  # The assistant agent can suggest calls to the calculator.
    executor=user_proxy,  # The user proxy agent can execute the calculator calls.
    name="query_xxxx",  # By default, the function name is used as the tool name.
    description="analysis assistant",  # A description of the tool.
)

register_function(
    query_xxxxx,
    caller=assistant,  # The assistant agent can suggest calls to the calculator.
    executor=user_proxy,  # The user proxy agent can execute the calculator calls.
    name="query_xxxx",  # By default, the function name is used as the tool name.
    description="Q&A assistant",  # A description of the tool.
)

query = """
xxxx
"""

chat_result = user_proxy.initiate_chat(
    assistant,
    message=react_prompt_message,
    question=query,
)
print(chat_result)
```