microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/
Creative Commons Attribution 4.0 International

how do i get each agent message? instead of agent last message? #478

Closed xmi1an closed 9 months ago

xmi1an commented 11 months ago

how do i get each agent message? instead of agent last message?

sonichi commented 11 months ago

https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages

sonichi commented 11 months ago

Candidate for FAQ.

xmi1an commented 11 months ago

https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages

But how can I get them during the conversation? I mean, chat_messages only prints after the whole conversation finishes, right?

afourney commented 11 months ago

No, the property provides access to a dictionary of all the messages sent or received by the agent. It is organized by conversation partner, and it is kept up to date whenever you access it.

Note that the key to the dictionary is the actual Agent instance (not a string name, etc.)
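To illustrate the shape of that dictionary, here is a conceptual sketch (plain Python, no AutoGen import; the stub `Agent` class and names `alice`/`bob` are hypothetical stand-ins for real ConversableAgent instances):

```python
# Conceptual sketch: chat_messages maps conversation-partner *objects*
# (not name strings) to lists of message dicts, updated on every receive.
class Agent:
    def __init__(self, name):
        self.name = name
        self.chat_messages = {}  # keyed by the partner Agent instance

    def receive(self, message, sender):
        self.chat_messages.setdefault(sender, []).append(message)

alice = Agent("alice")
bob = Agent("bob")
bob.receive({"content": "hello", "role": "user"}, alice)

# Look up by the Agent instance itself, not by its name string
print(bob.chat_messages[alice])  # [{'content': 'hello', 'role': 'user'}]
```

Because the key is the instance, you need a reference to the partner agent (or to iterate over the dict) to read its transcript.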

xmi1an commented 11 months ago

No the property provides access to a dictionary of all the messages sent or received by the agent. It is organized by conversation partner, and it should be up to date whenever you access it.

Note that the key to the dictionary is the actual Agent instance (not a string name, etc.)

Then how can I get each agent's messages during the conversation? The chat_messages property only provides them after the whole task completes.

victordibia commented 11 months ago

@xmi1an you can hook in a function that is called whenever an agent receives a message.

Assuming you have two agents user_proxy and assistant, you can register a function with these agents that gets called with some parameters.


def print_messages(recipient, messages, sender, config):
    # Forward the latest message to an optional user-supplied callback
    if config and config.get("callback") is not None:
        callback = config["callback"]
        callback(sender, recipient, messages[-1])
    print(f"Messages sent to: {recipient.name} | num messages: {len(messages)}")
    return False, None  # (False, None) ensures the agent communication flow continues

user_proxy.register_reply(
    [autogen.Agent, None],
    reply_func=print_messages, 
    config={"callback": None},
)

assistant.register_reply(
    [autogen.Agent, None],
    reply_func=print_messages, 
    config={"callback": None},
) 

In the above, we register a print_messages function that is called each time an agent receives a message.
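To see the callback wiring in isolation, here is a self-contained sketch of the same pattern with stub agents and no LLM calls (the `StubAgent` class and `store_message` callback are hypothetical illustrations, not AutoGen APIs):

```python
collected = []

def store_message(sender, recipient, message):
    # A real callback: record each message as it arrives
    collected.append((sender.name, recipient.name, message))

class StubAgent:
    """Minimal stand-in for an autogen agent, just enough for the demo."""
    def __init__(self, name):
        self.name = name

def print_messages(recipient, messages, sender, config):
    # Same structure as the reply function registered above
    if config and config.get("callback") is not None:
        config["callback"](sender, recipient, messages[-1])
    print(f"Messages sent to: {recipient.name} | num messages: {len(messages)}")
    return False, None

a = StubAgent("user_proxy")
b = StubAgent("assistant")
print_messages(b, [{"content": "hi"}], a, {"callback": store_message})
print(collected)  # [('user_proxy', 'assistant', {'content': 'hi'})]
```

In real use you would pass `store_message` as `config={"callback": store_message}` to `register_reply`, so `collected` grows in real time while the agents converse.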

SinaRanjkeshzade commented 10 months ago

Based on the AutoGen source code, the following class solves this problem:

import os
import autogen

class AutoGenChatter:
  def __init__(self, autogen_user_proxy, autogen_assistant):
    self.user_proxy = autogen_user_proxy
    self.assistant = autogen_assistant

  async def chat(self, message, silent=False):
    sender = self.user_proxy
    message = {"content": message}
    # Note: _process_received_message is a private AutoGen API and may change
    self.assistant._process_received_message(message, sender, silent)
    response = await self.assistant.a_generate_reply(sender=sender)
    return response

if __name__ == "__main__":
  config_list = [
        {
            'model': 'gpt-3.5-turbo-16k',
            'api_key': os.getenv("OPENAI_API_KEY"),
        }
    ]

  llm_config = {"config_list": config_list, "cache_seed": 35, "timeout":360}
  assistant_message = '''
                    Answer any question user asks you politely with details.
                '''

  user_proxy_message = '''
                You are a helpful assistant who answers any questions.
            '''

  test_assistant = autogen.AssistantAgent(
            name="test_assistant",
            system_message=assistant_message,
            llm_config=llm_config,
        )

  test_user_proxy = autogen.UserProxyAgent(
        name="Dave",
        human_input_mode="ALWAYS",

    )
  test_assistant.reset()
  autogen_chatter = AutoGenChatter(test_user_proxy, test_assistant)
  # chat() is a coroutine, so it must be run in an event loop
  import asyncio
  response = asyncio.run(autogen_chatter.chat(user_proxy_message))
  print(response)

After you define an instance of this class, use its chat function instead of calling initiate_chat. In your first call, pass the message you would have given to initiate_chat to define the problem; then continue the chat with further calls. Notice that for simplicity I removed the request_reply variable. Don't set silent to False; doing so causes the response of chat to be None.
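Since chat is a coroutine, each call has to be awaited. A minimal turn-by-turn driver looks like this (sketch only: `DummyChatter` is a hypothetical stand-in so the loop runs without an API key; swap in AutoGenChatter for real use):

```python
import asyncio

class DummyChatter:
    """Echoes input back; replaces AutoGenChatter just for this demo."""
    async def chat(self, message, silent=False):
        return f"echo: {message}"

async def converse(chatter, turns):
    # Drive the async chat() method one user turn at a time
    replies = []
    for turn in turns:
        replies.append(await chatter.chat(turn))
    return replies

replies = asyncio.run(converse(DummyChatter(), ["define the task", "continue"]))
print(replies)  # ['echo: define the task', 'echo: continue']
```

This is the "first call defines the problem, later calls continue the chat" flow described above, expressed as a loop.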

beckortikov commented 6 months ago

@xmi1an you can hook in a function that is called whenever an agent receives a message. [quotes victordibia's full register_reply example above]

How do I use it, then?

ekzhu commented 6 months ago

Can you elaborate on your use case?

AndriCause commented 4 months ago

@ekzhu I would like to save the chat messages in the DB of a website, and doing it with an async function while the agents are conversing in a group chat would be very useful, so that the user can also see who says what while the discussion is happening. Could you please show an example of how this could be achieved?
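One way to approach this with the register_reply pattern from earlier in the thread is to make the callback write each message to a database. Here is a sketch using stdlib sqlite3 with stub agents (the `save_message` function, `StubAgent` class, and agent names are hypothetical; in real use you would pass `save_message` via `config={"callback": save_message}` and likely use your site's own DB driver):

```python
import sqlite3

# In-memory DB for the demo; a website would use a persistent connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat_log (sender TEXT, recipient TEXT, content TEXT)")

def save_message(sender, recipient, message):
    # Persist one message so the frontend can poll the table live
    conn.execute(
        "INSERT INTO chat_log VALUES (?, ?, ?)",
        (sender.name, recipient.name, message.get("content", "")),
    )
    conn.commit()

class StubAgent:
    """Minimal stand-in for a real autogen agent."""
    def __init__(self, name):
        self.name = name

save_message(StubAgent("planner"), StubAgent("coder"), {"content": "write the query"})
rows = conn.execute("SELECT * FROM chat_log").fetchall()
print(rows)  # [('planner', 'coder', 'write the query')]
```

Because the callback fires on every received message, the table fills in while the group chat is still running, which is what lets users watch the discussion live.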

ekzhu commented 4 months ago

You can take a look at AutoGen Studio in samples/apps; it implements the functionality you mentioned. Cc @victordibia