Open gamercoder153 opened 5 months ago
Each agent stores its last message in its history, or you can simply read the Python file as a string and then pass it as an input accordingly. Hope this helps!
@Josephrp I want to store the output as a variable so that it would be easy to import in other files
Yes, just assign it like you would assign data to any variable: `variable = user_proxy.chat_messages[critic][-1]['content']`
hope this helps :-) :-)
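As a minimal illustration of what that lookup does (using a mocked `chat_messages` dict and a plain string key standing in for the real Critic agent object):

```python
# Sketch with mock data: chat_messages maps each counterpart agent to its
# list of message dicts, so [-1]['content'] grabs the last message's text.
chat_messages = {
    "critic": [
        {"role": "user", "name": "Critic", "content": "Needs a stronger intro."},
        {"role": "user", "name": "Critic", "content": "Final draft looks good."},
    ]
}

critic = "critic"  # in real code this is the Critic agent object, not a string
variable = chat_messages[critic][-1]["content"]
print(variable)  # -> Final draft looks good.
```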
@Josephrp It's working, but it's storing the full chat conversation between the user proxy, writer, and critic agents
WHO???
I don't even know who you are, how am I supposed to steal your money
I am not a hacker
@Deykeepcomn4me I really don't understand what's happening here
Hi @gamercoder153, I think you can do
chat_res = user_proxy.initiate_chat(recipient=writer, message=task, max_turns=3, summary_method="last_msg")
variable = chat_res.summary
The summary is supposed to contain a summary of the chat, which in your case should be the last message because you set "summary_method" to be "last_msg". Hopefully, this helps!
And, sorry about the confusing messages from the other user.
@qingyun-wu Thanks, but it is still storing the whole conversation between the agents in the variable.
@gamercoder153 you can also leverage the loggers to make this possible. Give that a try and see if it works for your use case.
`chat_result.chat_history` is a list of dictionaries, where every item contains "content", "role", and "name". You can filter the messages by a specific "role" or "name". For example, if you just want messages from "Critic", `[d for d in chat_result.chat_history if d['name'] == "Critic"]` will return only the Critic's messages.
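A runnable version of that filter, with a hand-written list standing in for a real `chat_result.chat_history`:

```python
# chat_history is a list of dicts, each with "content", "role", "name" keys.
# The mock data below mimics the structure; real entries come from autogen.
chat_history = [
    {"content": "Draft v1", "role": "user", "name": "Writer"},
    {"content": "Tighten the opening.", "role": "user", "name": "Critic"},
    {"content": "Draft v2", "role": "user", "name": "Writer"},
    {"content": "Looks good now.", "role": "user", "name": "Critic"},
]

# Keep only the Critic's message contents.
critic_messages = [d["content"] for d in chat_history if d["name"] == "Critic"]
print(critic_messages)  # -> ['Tighten the opening.', 'Looks good now.']
```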
I am working on a project in which each .py file contains a nested chat between 2 agents. I have 3 .py files and I want the output of the first .py file to be passed as an input to the second .py file, and so on for the third .py file.
I am unable to figure out how to store the last output from one file and pass it to another file.
This is an example of a nested chat Python code:
```python
import autogen

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST.json")
llm_config = {"config_list": config_list}

task = """Write a concise but engaging blogpost about Agentic frameworks"""

writer = autogen.AssistantAgent(
    name="Writer",
    llm_config={"config_list": config_list},
    system_message="""
    You are a professional writer, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives in a funny style.
    You should improve the quality of the content based on the feedback from the user.
    """,
)

user_proxy = autogen.UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    code_execution_config={
        "last_n_messages": 1,
        "work_dir": "my_code",
        "use_docker": False,
    },
)

critic = autogen.AssistantAgent(
    name="Critic",
    llm_config={"config_list": config_list},
    system_message="""
    You are a critic, known for your thoroughness and commitment to standards.
    Your task is to review content and suggest improvements.
    """,
)

def reflection_message(recipient, messages, sender, config):
    print("Reflecting...")
    return (
        "Reflect and provide critique on the following writing. \n\n "
        f"{recipient.chat_messages_for_summary(sender)[-1]['content']}"
    )

user_proxy.register_nested_chats(
    [
        {
            "recipient": critic,
            "message": reflection_message,
            "summary_method": "last_msg",
            "max_turns": 1,
        }
    ],
    trigger=writer,
)

user_proxy.initiate_chat(recipient=writer, message=task, max_turns=3, summary_method="last_msg")
```
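One way to pass the output of one script to the next is to persist the final summary to disk at the end of file 1 and read it back at the start of file 2. A minimal sketch, assuming you are happy with a JSON handoff file; the file name `handoff.json` and both helper functions are my own choice, not part of autogen:

```python
# Hypothetical handoff sketch: script 1 saves its chat summary, script 2
# loads it as its task string. Only the save/load plumbing is shown here.
import json
from pathlib import Path

HANDOFF = Path("handoff.json")

def save_output(text: str) -> None:
    """Called at the end of file 1: store the chat summary on disk."""
    HANDOFF.write_text(json.dumps({"summary": text}))

def load_input() -> str:
    """Called at the start of file 2: read the previous script's output."""
    return json.loads(HANDOFF.read_text())["summary"]

# In file 1 you would do something like:
#   chat_res = user_proxy.initiate_chat(recipient=writer, message=task,
#                                       max_turns=3, summary_method="last_msg")
#   save_output(chat_res.summary)
# And in file 2:
#   task = load_input()

save_output("Blogpost draft about agentic frameworks")
print(load_input())  # -> Blogpost draft about agentic frameworks
```

Any serialization format works here; JSON just keeps the handoff readable and easy to extend with extra fields later.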