Open lucasjinreal opened 9 months ago
You need to use is_termination_msg
instead of commenting it out. You can search the samples in the repo to see various ways of using this.
One more comment: for is_termination_msg to work, there must also be a TERMINATE in the message from the assistant. We can modify the system_message of the Writer node as follows:
Writer: you are responsible for writing articles based on the information you have. Add 'TERMINATE' on a new line at the end when you finish writing.
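To illustrate how the two pieces pair up (a sketch, not autogen's internals): the assistant's system message instructs it to end with TERMINATE, and the user proxy's termination check simply looks for that marker at the end of the message content. Written out as a plain function instead of a lambda:

```python
def is_termination_msg(msg: dict) -> bool:
    """Return True when an assistant message ends with the TERMINATE marker.

    Same check as the lambda in the agent definition; handles a missing or
    None "content" field defensively.
    """
    content = msg.get("content") or ""
    return content.rstrip().endswith("TERMINATE")
```

This is passed to UserProxyAgent via the is_termination_msg parameter, exactly like the lambda in the code further down.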
@rickyloynd-microsoft @hughlv thanks a ton for the help! it worked like a charm.
May I ask a further question: how do I add a request interval for every single request to the LLM? I found that OpenAI has a QPS limit for unpaid users (and most API providers have one), so I have to constrain the request frequency from the agents.
What do you mean by request interval?
Got some error:
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo-1106 in organization org-8wQNKqvcyzwI6J6N0ZAS7X20 on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}
It seems I'm being rate-limited by OpenAI. Would it be possible to make the agents send requests to the LLM server at set time intervals?
BTW, I finally got the agents to write an article by searching the news and gathering information themselves.
However, I'm not sure whether the article they wrote is hallucinated.
Does AutoGen have a feature that converts URLs to content by fetching the HTML?
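(Not an autogen feature, but as a sketch with the standard library alone: once you have the HTML of a page, e.g. from urllib.request.urlopen, a small parser can strip it down to visible text. html_to_text below is a hypothetical helper, not an autogen API.)

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from HTML, skipping script/style blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # >0 while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())


def html_to_text(html: str) -> str:
    """Reduce an HTML document to its visible text, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

You could register such a helper as a tool function (like search_google below) so the agents can call it on any URL they find.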
This is how my agents are defined:
searcher = autogen.AssistantAgent(
    name="Searcher",
    llm_config=llm_config,
    system_message="Searcher. You are responsible for searching for content, using the available tools to get the information you need.",
)
writer_assistant = autogen.AssistantAgent(
    name="Writer",
    llm_config=llm_config_no_func,
    system_message="Writer: you are responsible for writing an article based on the information you have, at least 4000 words. Add 'TERMINATE' on a new line at the end when you finish writing.",
)
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    # human_input_mode="TERMINATE",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    # llm_config=llm_config_no_func,
    system_message="""When a link is provided, you should ask the assistant to fetch the content. Reply TERMINATE if the task has been solved to full satisfaction. Otherwise, reply CONTINUE, or give the reason why the task is not solved yet.""",
    function_map={
        "exchange_rate": currency_calculator,
        "search_google": search_google_news,
    },
)
Would it be possible to make the agents send requests to the LLM server at set time intervals?
In llm_config, you can specify a "timeout" period in seconds, so that a request with no response will not be retried until that time interval has passed. But you may still hit the RateLimitError, depending on how many requests your app makes.
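A minimal sketch of what that looks like; the exact keys ("config_list", "timeout") and their placement can vary across autogen versions, so check the docs for yours. The model name and key below are illustrative placeholders.

```python
# Hypothetical config for illustration; "timeout" is the per-request
# timeout in seconds before the request is considered failed.
llm_config = {
    "config_list": [
        {"model": "gpt-3.5-turbo-1106", "api_key": "sk-..."},  # placeholder key
    ],
    "timeout": 120,  # wait up to 120 s for a response
}
```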
Indeed, so is there any way to force the requests made inside AutoGen to happen at a set interval?
Does AutoGen have a feature that converts URLs to content by fetching the HTML?
Indeed, so is there any way to force the requests made inside AutoGen to happen at a set interval?
No, but your app can call sleep() to wait as long as you want before calls.
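For example, a sketch of app-level pacing, assuming your app wraps its own call sites (nothing here is an autogen API): a decorator that sleeps so successive calls are at least min_interval seconds apart. 20 s matches the free tier's 3 requests-per-minute limit from the error above.

```python
import time


def throttle(min_interval: float):
    """Decorator: enforce at least `min_interval` seconds between calls."""
    def wrap(fn):
        last = [0.0]  # mutable cell holding the time of the last call

        def inner(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last[0])
            if wait > 0:
                time.sleep(wait)
            last[0] = time.monotonic()
            return fn(*args, **kwargs)

        return inner
    return wrap


@throttle(20.0)  # 60 s / 3 RPM on the free tier
def start_task(problem: str):
    ...  # hypothetical: kick off one multi-agent conversation here
```

This only paces the call sites your app controls; it cannot slow down requests issued from inside autogen's own loop.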
I don't think I can insert sleep() into the agent calls without changing the AutoGen source code.
Even without modifying or cloning the repo, you can create a new agent that inherits from another agent like AssistantAgent, and put your code into the new agent's registered reply function. But it all depends on your use case and how much coding you want to do. Even without coding, to reduce the rate-limit errors from openai, for many use cases you could just give the entire multi-agent system one problem, then sleep before giving it the next one.
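A sketch of that subclassing idea. BaseAgent below is a stand-in so the example is self-contained; in real code you would inherit from autogen.AssistantAgent and hook the delay into its reply path (via its reply-registration mechanism, whose exact signature depends on your autogen version).

```python
import time


class BaseAgent:
    """Stand-in for autogen.AssistantAgent in this sketch."""

    def generate_reply(self, messages):
        return "reply"  # real agent would call the LLM here


class PacedAgent(BaseAgent):
    """Agent that waits at least MIN_INTERVAL seconds between replies."""

    MIN_INTERVAL = 20.0  # 60 s / 3 RPM on the free tier

    def __init__(self):
        self._last = 0.0

    def generate_reply(self, messages):
        wait = self.MIN_INTERVAL - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()
        return super().generate_reply(messages)
```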
@lucasjinreal a tricky way is to add a sleep in your is_termination_msg checking function.
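Concretely, the trick looks like this (a hack: the check runs once per received message, so this delays every round trip, not only LLM calls):

```python
import time


def is_termination_msg(msg: dict) -> bool:
    time.sleep(20)  # crude pacing: piggyback a delay on the termination check
    return (msg.get("content") or "").rstrip().endswith("TERMINATE")
```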
My current situation: the writer can write the article, but when it's done, it won't stop:
This is how I call it:
Actually, I've found that in many scenarios the agent won't stop gracefully.
I really want to know how to make it stop gracefully.