Closed adriangalilea closed 1 year ago
Thanks, am looking into this
...
Termination.
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
admin (to chat_manager):
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
admin (to chat_manager):
--------------------------------------------------------------------------------
[... the empty auto reply above repeats many more times ...]
--------------------------------------------------------------------------------
[autogen.oai.completion: 10-04 22:44:34] {237} INFO - retrying in 10 seconds...
Traceback (most recent call last):
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/autogen/oai/completion.py", line 209, in _get_response
response = openai_completion.create(request_timeout=request_timeout, **config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/Users/adrian/Developer/microsoft-autogen-experiments/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: Rate limit reached for default-gpt-4 in organization [REDACTED] on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.
@adriangalilea Just a reminder that you are exposing an API key from your repo, please make your repo private or remove sensitive information ASAP
I think you are referring to this: https://github.com/JayZeeDesign/microsoft-autogen-experiments/issues/1
It's the original user's repo; I notified them and the API keys were disabled.
Thank you though.
Btw, I identified another instance of this:
(Note: The blog content has been generated based on the given prompts and has been tweaked to provide a concise step-by-step guide. TERMINATE)
--------------------------------------------------------------------------------
admin (to chat_manager):
--------------------------------------------------------------------------------
admin (to chat_manager):
I think the problem is this:
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "coding"},
    is_termination_msg=lambda x: x.get("content", "") and x.get(
        "content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    function_map={
        "search": search,
        "scrape": scrape,
    }
)
The endswith part.
Yes, you are right. The is_termination_msg is asking for an exact match of TERMINATE,
but admin replies (...TERMINATE)
or Terminate.
@sonichi We would need a more robust method to terminate the chat. How about making it a terminate function for admin to call via function_call?
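In the meantime, a more forgiving check could normalize the tail of the message before matching. This is just a sketch in plain Python (not an AutoGen API), which handles the `(...TERMINATE)` and `Terminate.` variants seen above:

```python
import re

def is_termination_msg(msg):
    """Case-insensitive termination check that ignores trailing punctuation."""
    content = (msg.get("content") or "").strip()
    # Drop trailing non-word characters such as ')' or '.' that models often append.
    content = re.sub(r"[\W_]+$", "", content)
    return content.upper().endswith("TERMINATE")
```

Passing this as `is_termination_msg` would end the chat on `"(...TERMINATE)"` and `"Terminate."` as well as a bare `"TERMINATE"`, while still ignoring messages that merely continue the conversation.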
Do you happen to have any example at hand?
@adriangalilea haven't yet. I can create a demo example in #102 to give it a try though.
In the meantime, after a closer look, it seems that the empty message is because admin doesn't have an llm_config, so it uses its default_reply, which is an empty string.
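That explains the loop mechanically. A minimal sketch (plain Python, no autogen) of why an empty default reply can never satisfy the termination check from the snippet above:

```python
# The endswith-style check from the user_proxy above.
def is_termination_msg(msg):
    content = msg.get("content", "") or ""
    return content.rstrip().endswith("TERMINATE")

# An empty auto-reply never ends with "TERMINATE", so the chat loops forever.
print(is_termination_msg({"content": ""}))           # False
print(is_termination_msg({"content": "TERMINATE"}))  # True
```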
@LittleLittleCloud Hello,
I've been trying to find documentation for llm_config and I didn't find much helpful information.
What would you say is a good default_reply
to avoid this kind of error?
Also, by a terminate function do you mean a function that simply outputs TERMINATE?
Thank you so much!
Yep, you can set default_auto_reply to TERMINATE and is_termination_msg to lambda x: x["content"] == "TERMINATE" to avoid an infinite loop.
Here's the complete code; you might want to replace JulyChat with gpt-3.5-turbo or another model if you are using OpenAI models.
file: content_agent.py
import os
from autogen import config_list_from_json
import autogen
import requests
from bs4 import BeautifulSoup
import json
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain import PromptTemplate
import openai
from dotenv import load_dotenv
# Get API key
load_dotenv()
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST.json")
browserless_api_key = os.getenv("BROWSERLESS_API_KEY")
X_API_KEY = os.getenv("X_API_KEY")
# Define research function
def search(query):
    url = "https://google.serper.dev/search"
    payload = json.dumps({
        "q": query
    })
    headers = {
        'X-API-KEY': X_API_KEY,
        'Content-Type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json()
def scrape(url: str):
    # Scrape a website; if the content is too large, summarize it first.
    print("Scraping website...")
    # Define the headers for the request
    headers = {
        'Cache-Control': 'no-cache',
        'Content-Type': 'application/json',
    }
    # Define the data to be sent in the request
    data = {
        "url": url
    }
    # Convert Python object to JSON string
    data_json = json.dumps(data)
    # Send the POST request
    response = requests.post(
        f"https://chrome.browserless.io/content?token={browserless_api_key}", headers=headers, data=data_json)
    # Check the response status code
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, "html.parser")
        text = soup.get_text()
        print("CONTENT:", text)
        if len(text) > 8000:
            output = summary(text)
            return output
        else:
            return text
    else:
        print(f"HTTP request failed with status code {response.status_code}")
        return f"HTTP request failed with status code {response.status_code}"
def summary(content):
    llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k-0613")
    text_splitter = RecursiveCharacterTextSplitter(
        separators=["\n\n", "\n"], chunk_size=10000, chunk_overlap=500)
    docs = text_splitter.create_documents([content])
    map_prompt = """
    Write a detailed summary of the following text for a research purpose:
    "{text}"
    SUMMARY:
    """
    map_prompt_template = PromptTemplate(
        template=map_prompt, input_variables=["text"])
    summary_chain = load_summarize_chain(
        llm=llm,
        chain_type='map_reduce',
        map_prompt=map_prompt_template,
        combine_prompt=map_prompt_template,
        verbose=True
    )
    output = summary_chain.run(input_documents=docs)
    return output
def research(query):
    llm_config_researcher = {
        "model": "JulyChat",
        "functions": [
            {
                "name": "search",
                "description": "google search for relevant information",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "Google search query",
                        }
                    },
                    "required": ["query"],
                },
            },
            {
                "name": "scrape",
                "description": "Scraping website content based on url",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "url": {
                            "type": "string",
                            "description": "Website url to scrape",
                        }
                    },
                    "required": ["url"],
                },
            },
        ],
        "config_list": config_list,
    }
    researcher = autogen.AssistantAgent(
        name="researcher",
        system_message="Research about a given query, collect as much information as possible, and generate detailed research results with loads of technical details with all reference links attached; Add TERMINATE to the end of the research report;",
        llm_config=llm_config_researcher,
    )
    user_proxy = autogen.UserProxyAgent(
        name="User_proxy",
        code_execution_config={"last_n_messages": 2, "work_dir": "coding"},
        is_termination_msg=lambda x: x["content"] == "TERMINATE",
        default_auto_reply="TERMINATE",
        human_input_mode="NEVER",
        function_map={
            "search": search,
            "scrape": scrape,
        }
    )
    user_proxy.initiate_chat(researcher, message=query)
    # set the receiver to be researcher, and get a summary of the research report
    user_proxy.stop_reply_at_receive(researcher)
    user_proxy.send(
        "Give me the research report that just generated again, return ONLY the report & reference links", researcher)
    # return the last message the expert received
    return user_proxy.last_message()["content"]
# Define write content function
def write_content(research_material, topic):
    gpt_35_config = {
        "model": "JulyChat",
        "config_list": config_list,
    }
    editor = autogen.AssistantAgent(
        name="editor",
        system_message="You are a senior editor of an AI blogger, you will define the structure of a short blog post based on material provided by the researcher, and give it to the writer to write the blog post",
        llm_config=gpt_35_config,
    )
    writer = autogen.AssistantAgent(
        name="writer",
        system_message="You are a professional AI blogger who is writing a blog post about AI, you will write a short blog post based on the structure provided by the editor, and feedback from the reviewer; After 2 rounds of content iteration, add TERMINATE to the end of the message",
        llm_config=gpt_35_config,
    )
    reviewer = autogen.AssistantAgent(
        name="reviewer",
        system_message="You are a world class tech blog content critic, you will review & critique the written blog and provide feedback to the writer. After 2 rounds of content iteration, add TERMINATE to the end of the message",
        llm_config=gpt_35_config,
    )
    user_proxy = autogen.UserProxyAgent(
        name="admin",
        system_message="A human admin. Interact with editor to discuss the structure. Actual writing needs to be approved by this admin.",
        code_execution_config=False,
        is_termination_msg=lambda x: x["content"] == "TERMINATE",  # this ends the conversation
        human_input_mode="NEVER",  # along with default_auto_reply="TERMINATE", this makes the agent reply "TERMINATE" automatically
        default_auto_reply="TERMINATE",
    )
    groupchat = autogen.GroupChat(
        agents=[user_proxy, editor, writer, reviewer],
        messages=[],
        max_round=20)
    manager = autogen.GroupChatManager(groupchat=groupchat)
    # phase 1
    # collaborate with editor, reviewer and writer over the structure of the blog and the content
    user_proxy.initiate_chat(
        manager, message=f"Write a blog about {topic}, here are the materials: {research_material}.")
    # phase 2
    # ask writer to write the blog based on the structure
    user_proxy.send("writer, now complete the blog based on feedback you already received. Let me know once you complete", manager)
    # return the last message from writer
    writer_messages = filter(lambda x: x["name"] == "writer", groupchat.messages)
    last_writer_message = list(writer_messages)[-1]
    return last_writer_message['content']
# Define content assistant agent
llm_config_content_assistant = {
    "model": "JulyChat",
    "functions": [
        {
            "name": "research",
            "description": "research about a given topic, return the research material including reference links",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The topic to be researched about",
                    }
                },
                "required": ["query"],
            },
        },
        {
            "name": "write_content",
            "description": "Write content based on the given research material & topic",
            "parameters": {
                "type": "object",
                "properties": {
                    "research_material": {
                        "type": "string",
                        "description": "research material of a given topic, including reference links when available",
                    },
                    "topic": {
                        "type": "string",
                        "description": "The topic of the content",
                    }
                },
                "required": ["research_material", "topic"],
            },
        },
    ],
    "config_list": config_list,
}
writing_assistant = autogen.AssistantAgent(
    name="writing_assistant",
    system_message="You are a writing assistant, you can use the research function to collect the latest information about a given topic, and then use the write_content function to write very well written content; Reply TERMINATE when your task is done",
    llm_config=llm_config_content_assistant,
)
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    human_input_mode="NEVER",
    default_auto_reply="TERMINATE",
    is_termination_msg=lambda x: x["content"] == "TERMINATE",
    function_map={
        "write_content": write_content,
        "research": research,
    }
)
user_proxy.initiate_chat(
    writing_assistant, message="write a blog about autogen multi AI agent framework")
An example output
FWIW, we've been using:
is_termination_msg = lambda x: True if "TERMINATE" in x.get("content") else False,
This helps a little with the exact-match / ends-with-"TERMINATE" problem.
Finding a more robust means of detecting termination is definitely on our to-do list!
With that approach there's a slight chance of it stopping if TERMINATE appears mid-message for whatever reason.
I think the function approach might be less error-prone, at the cost of some context, but I'm not sure how to build it; waiting for that example.
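The difference between the two checks is easy to demonstrate in plain Python (the message dicts here are illustrative, not real AutoGen traffic): a substring match fires on any mention of the keyword, while a tail check only fires when the message actually ends with it.

```python
# Substring check: True whenever "TERMINATE" appears anywhere in the content.
substring_check = lambda x: "TERMINATE" in (x.get("content") or "")
# Tail check: True only when the content actually ends with "TERMINATE".
tail_check = lambda x: (x.get("content") or "").rstrip().endswith("TERMINATE")

mention = {"content": "The agent should say TERMINATE when done. Next step: ..."}
real_stop = {"content": "Report complete. TERMINATE"}

print(substring_check(mention), tail_check(mention))      # True False
print(substring_check(real_stop), tail_check(real_stop))  # True True
```

So a message that merely *talks about* the keyword would end the chat under the substring check but not under the tail check.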
Not in this case. The group chat will be terminated only when user_proxy replies "TERMINATE". Considering that user_proxy in this example is not backed by an LLM and will always return the default message, which is TERMINATE, the exit-check logic here is quite robust.
Things will be different when user_proxy is powered by an LLM, where a function call to finish a group chat would be the preferred way to go. Let me come up with an example for that scenario.
@LittleLittleCloud oh wow I missed your previous message, will be testing it out, thank you so much.
EDIT: All working flawlessly now, no infinite loops detected, thank you again.
Are you guys present on Discord?
@adriangalilea Yes, I'm on Discord. I'm the one who responded to you and asked if you could create this issue on GitHub.
If this issue gets resolved I am going to close it. Feel free to reopen it if you have further questions.
Hello,
https://github.com/JayZeeDesign/microsoft-autogen-experiments/blob/main/content_agent.py
I got a few instances of infinite loops running the above with GPT-4.
I don't have the logs, sorry.