microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

Please make this demo work: make agent search content and write an article based on it #1085

Open lucasjinreal opened 9 months ago

lucasjinreal commented 9 months ago

My current situation: the writer can write the article, but once it is done, it won't stop:

(screenshot of the conversation continuing after the article is finished)

This is how I call it:


searcher = autogen.AssistantAgent(
    name="Searcher",
    llm_config=llm_config,
    system_message="Searcher. You response for search content, using certain tools get information you need.",
)

writer_assistant = autogen.AssistantAgent(
    name="Writer",
    llm_config=llm_config_no_func,
    system_message="Writer: you response for write article based on information you have.",
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    # human_input_mode="TERMINATE",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    # is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    # llm_config=llm_config_no_func,
    system_message="""When a link is provided, you should ask the assistant for fetching the content. Reply TERMINATE if the task has been solved at full satisfaction.Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
    function_map={
        "exchange_rate": currency_calculator,
        "search_google": search_google_news,
    },
)

# user_proxy.register_function(function_map=)

print(searcher.llm_config)

groupchat = autogen.GroupChat(
    agents=[user_proxy, searcher, writer_assistant], messages=[], max_round=12
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="search the lastest news about Elon Musk, and generate an article on it, a very detailed article.",
)

Actually, I found that in many scenarios the agent won't stop gracefully.

I really want to know: how do I make it stop gracefully?

rickyloynd-microsoft commented 9 months ago

You need to use is_termination_msg instead of commenting it out. You can search the samples in the repo to see various ways of using this.

hughlv commented 9 months ago

You need to use is_termination_msg instead of commenting it out. You can search the samples in the repo to see various ways of using this.

One more comment: for is_termination_msg to work, there also needs to be a TERMINATE in the assistant's message. We can modify the system_message of the Writer agent as follows:

Writer: you are responsible for writing an article based on the information you have. Add 'TERMINATE' at the end as a newline when you finish writing.
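
For reference, a minimal sketch of how the two pieces fit together, reusing the names and the is_termination_msg lambda already shown in this thread:

writer_assistant = autogen.AssistantAgent(
    name="Writer",
    llm_config=llm_config_no_func,
    system_message="Writer: you are responsible for writing an article based on the information you have. Add 'TERMINATE' at the end as a newline when you finish writing.",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    # stop as soon as a received message ends with TERMINATE
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
)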

lucasjinreal commented 9 months ago

@rickyloynd-microsoft @hughlv Thanks a ton for the help! It worked like a charm.

May I ask further: how do I add a request interval for every single request to the LLM? I found that OpenAI has a QPS limit for unpaid users (and most API providers have one), so I have to constrain the request frequency from the agents.

rickyloynd-microsoft commented 9 months ago

What do you mean by request interval?

lucasjinreal commented 9 months ago

I got this error:

openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo-1106 in organization org-8wQNKqvcyzwI6J6N0ZAS7X20 on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}

It seems I'm being rate-limited by OpenAI. Would it be possible to make the agents send requests to the LLM server at set time intervals?

BTW, I finally got the agents to write an article by searching the news and gathering information themselves.

However, I'm not sure whether the written article is hallucinated.

Does AutoGen have a feature that converts URLs to content by fetching the HTML?

(screenshot of the article generated by the agents)

This is how my agents are defined:

searcher = autogen.AssistantAgent(
    name="Searcher",
    llm_config=llm_config,
    system_message="Searcher. You response for search content, using certain tools get information you need.",
)

writer_assistant = autogen.AssistantAgent(
    name="Writer",
    llm_config=llm_config_no_func,
    system_message="Writer: you response for write article based on information you have, 4000 words at least. Add 'TERMINATE' at the end as a newline when you finish writing.",
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    # human_input_mode="TERMINATE",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    # llm_config=llm_config_no_func,
    system_message="""When a link is provided, you should ask the assistant for fetching the content. Reply TERMINATE if the task has been solved at full satisfaction.Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
    function_map={
        "exchange_rate": currency_calculator,
        "search_google": search_google_news,
    },
)

rickyloynd-microsoft commented 9 months ago

Would it be possible to make the agents send requests to the LLM server at set time intervals?

In llm_config, you can specify a "timeout" period in seconds, so that the request (with no response) will not be retried until that time interval has passed. But you may still hit the RateLimitError, depending on how many requests your app is making.
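
For example, a sketch of such an llm_config; config_list is assumed to be loaded elsewhere (e.g. with autogen.config_list_from_json), and the values are illustrative:

llm_config = {
    "config_list": config_list,  # assumed to be defined elsewhere
    "timeout": 120,              # seconds to wait before an unanswered request is given up on
}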

lucasjinreal commented 9 months ago

Indeed. So is there any way to force the requests made inside AutoGen to have an interval between them?

rickyloynd-microsoft commented 9 months ago

Does AutoGen have a feature that converts URLs to content by fetching the HTML?

https://github.com/microsoft/autogen/pull/1093

rickyloynd-microsoft commented 9 months ago

Indeed. So is there any way to force the requests made inside AutoGen to have an interval between them?

No, but your app can call sleep() to wait as long as you want before calls.
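
For example, a sketch that spaces out whole chats at the application level (the task list is hypothetical, and this does not slow down the calls made inside a single chat):

import time

tasks = ["search the latest news about Elon Musk and write an article on it"]  # hypothetical tasks
for task in tasks:
    user_proxy.initiate_chat(manager, message=task)
    time.sleep(25)  # wait before starting the next chat to stay under the RPM limit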

lucasjinreal commented 9 months ago

I don't think I can insert a sleep into the agent calls without changing the AutoGen source code.

rickyloynd-microsoft commented 9 months ago

Without modifying or cloning the repo, you can create a new agent that inherits from another agent such as AssistantAgent, and put your code into the new agent's registered reply function. But it all depends on your use case and how much coding you want to do. Even without coding, to reduce the rate-limit errors from OpenAI, for many use cases you could simply give the entire multi-agent system one problem, then sleep before giving it the next one.
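
A rough sketch of that idea, assuming the register_reply API of the v0.2-era ConversableAgent; the class name and delay are hypothetical, so check the current docs before relying on it:

import time
import autogen

class ThrottledAssistant(autogen.AssistantAgent):
    # Hypothetical agent that sleeps before generating each reply.
    def __init__(self, *args, delay_seconds=20, **kwargs):
        super().__init__(*args, **kwargs)
        self._delay_seconds = delay_seconds
        # position=0 runs the throttle before the built-in reply functions
        self.register_reply(autogen.Agent, ThrottledAssistant._throttled_reply, position=0)

    def _throttled_reply(self, messages=None, sender=None, config=None):
        time.sleep(self._delay_seconds)  # crude rate limiting before each LLM call
        return False, None  # not final: fall through to the normal LLM reply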

thinkall commented 3 months ago

@lucasjinreal One trick is to add a sleep in your is_termination_msg checking function.
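
A minimal sketch of that trick; the 20-second delay just mirrors the "try again in 20s" hint in the error message above:

import time

def is_termination_msg_with_delay(msg):
    time.sleep(20)  # piggyback a delay on every termination check to slow the chat down
    return msg.get("content", "").rstrip().endswith("TERMINATE")

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    is_termination_msg=is_termination_msg_with_delay,
    code_execution_config={"work_dir": "web"},
)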