microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/
Creative Commons Attribution 4.0 International

[Issue]: logging level for unset IOStream should be debug or info, not warning? #2249

Closed Duncan-Haywood closed 6 months ago

Duncan-Haywood commented 6 months ago

Describe the issue

Just a preference, and maybe a clarification request. The warning I'm getting is WARNING:autogen.io.base:No default IOStream has been set, defaulting to IOConsole., but I feel like that doesn't need to be a warning-level log. Or, if it is a real problem, the way to fix it is not very clear. I tried adding this line, but it didn't suppress the warning: autogen.io.IOStream.set_default(autogen.io.IOConsole()). Any advice, anyone?
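One stopgap that works regardless of the IOStream default (a sketch using only the standard library, on the assumption that the message is emitted through Python's logging module via the autogen.io.base logger, as the WARNING:autogen.io.base: prefix suggests):

```python
import logging

# Raise the threshold of the logger that emits the IOStream warning
# ("autogen.io.base", per the log prefix) so WARNING records are dropped.
# Note this only filters the log output; it does not set a default IOStream.
logging.getLogger("autogen.io.base").setLevel(logging.ERROR)
```

This works even before autogen is imported, because logging.getLogger creates (or returns) the logger by name.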

Steps to reproduce

import autogen
import logging
from dotenv import load_dotenv

load_dotenv()
autogen.DEFAULT_MODEL = "gpt-4-0125-preview"
autogen.FAST_MODEL = "gpt-3.5-turbo"

logger = logging.getLogger(__name__)


def main(messages: list[dict], prompt):
    # get conversation prompt

    # create assistant agents
    user_proxy_agent = autogen.ConversableAgent(
        "user-proxy-agent",
        system_message="you answer questions given you.",
        human_input_mode="NEVER",
        llm_config={"model": autogen.FAST_MODEL},
        description="The conversational agent that proxies the user's responses to the final question asked to it.",
    )
    initial_question_agent = autogen.ConversableAgent(
        "initial-response-agent",
        system_message=f"You follow the following instructions delimited by <<<>>> in respect to where the conversation is already. <<<{prompt}>>>",
        human_input_mode="NEVER",
        llm_config={"model": autogen.FAST_MODEL},
        description="The conversational agent that follows the interview agenda from the interview prompt to create an initial draft of the next interview question to ask. This is called first.",
    )
    question_evaluator_agent = autogen.ConversableAgent(
        "response-evaluator-agent",
        system_message=f"please evaluate the previous message from the initial response agent based on the following prompt delimited by <<<>>> and evaluate whether the message followed the prompt correctly and give feedback on where it didn't do it correctly and how to fix it. <<<{prompt}>>>",
        human_input_mode="NEVER",
        llm_config={"model": autogen.FAST_MODEL},
        description="The conversational agent that evaluates the response from the initial response agent and gives feedback on how to improve the response. This is called second.",
    )
    final_question_agent = autogen.ConversableAgent(
        "final-response-agent",
        system_message=f"given the last feedback and the original response and the following prompt instructions in <<<>>>, give a final version of a response/question to the user: <<<{prompt}>>>",
        human_input_mode="NEVER",
        llm_config={"model": autogen.FAST_MODEL},
        description="The conversational agent that gives the final response based on the feedback from the response evaluator agent and the original response. This is called third.",
    )
    # final_question_agent.register_nested_chats()

    # create group chat
    group_chat = autogen.GroupChat(
        [
            user_proxy_agent,
            initial_question_agent,
            question_evaluator_agent,
            final_question_agent,
        ],
        messages,
        max_round=10,
    )
    group_chat_manager = autogen.GroupChatManager(group_chat)
    # generate response
    response = group_chat_manager.run_chat(messages=messages, sender=initial_question_agent, config=group_chat)
    return group_chat.messages


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    prompt = "this is a test prompt. ask interview questions to acquire a customer for Chatgems."
    main([{"role": "user", "content": ""}], prompt)

Screenshots and logs

WARNING:autogen.io.base:No default IOStream has been set, defaulting to IOConsole.

Additional Information

Thanks for any help. Version: 0.2.21. I also have autogenstudio installed in this virtual environment.

Duncan-Haywood commented 6 months ago

The reason it was bothering me is that when I'm using it as a "CLI" for testing, the warning messages keep popping up like so

--------------------------------------------------------------------------------
No default IOStream has been set, defaulting to IOConsole.
No default IOStream has been set, defaulting to IOConsole.
response-evaluator-agent (to chat_manager):

Your response effectively builds upon the user's acknowledgment of the benefits of real-time performance monitoring and analytics, linking it to Chatgems' capabilities. By asking how the insights will be integrated into their workflows and measuring impact, you keep the conversation focused on tangible outcomes. Great job!

which would be nice not to have.

It's possible to remove these by adding this line in the if __name__ == "__main__": block: autogen.logger.setLevel(logging.ERROR), but I feel like this message should possibly be logged at a lower level?

If not, hopefully I can figure out how to set the IOStream; the following didn't help: autogen.io.IOStream.set_default(autogen.io.IOConsole())
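Because Python loggers are hierarchical, capping the parent logger's level also governs its children. A stdlib-only sketch of why the autogen.logger.setLevel(logging.ERROR) line above works (assuming autogen's loggers are named under the "autogen" namespace, as the autogen.io.base prefix indicates):

```python
import logging

# Child loggers with no explicit level defer to the nearest configured
# ancestor, so capping "autogen" at ERROR also filters "autogen.io.base".
logging.getLogger("autogen").setLevel(logging.ERROR)

child = logging.getLogger("autogen.io.base")
```

The downside of capping the parent is that every WARNING from any autogen module is filtered, not just the IOStream one.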

ekzhu commented 6 months ago

This is resolved and will be fixed in the next release. #2207

Duncan-Haywood commented 6 months ago

Thanks @ekzhu