microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Feature Request]: autogen/conversable_agent.py: summary_args lacks options for "reflection and self-criticism" #2621


wangruxun commented 5 months ago

Is your feature request related to a problem? Please describe.

The default prompt in the summary_args parameter, DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases.", produces only a simple summary of the conversation. However, reflection and self-criticism are core capabilities of an LLM-based agent, yet this capability is not built into the conversable agent, which is unreasonable.

  The name "summary_method": "reflection_with_llm" may also cause misunderstanding, because the result is just a summary without reflection or self-criticism. It should be renamed "summary_with_llm", and a new option that actually performs reflection should be introduced as "reflection_with_llm".

Describe the solution you'd like

  1. I suggest adding a reflection prompt, for example:

DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT = (
    "Explain why you gave this answer, in around 150 words. As a super agent, "
    "offer constructive self-criticism of the current evaluation, covering its "
    "weaknesses and strengths, and summarize."
)

  2. (2.1) Before modification, the supported strings are "last_msg" and "reflection_with_llm":

    • when set to "last_msg", it returns the last message of the dialog as the summary.
    • when set to "reflection_with_llm", it returns a summary extracted using an LLM client. llm_config must be set in either the recipient or sender.

    The description of "reflection_with_llm" is inaccurate: currently it produces just a summary, so the name should be changed to "summary_with_llm".

    (2.2) After modification, the supported strings would be "last_msg", "summary_with_llm", and "reflection_with_llm":

    • when set to "last_msg", it returns the last message of the dialog as the summary.
    • when set to "summary_with_llm", it returns a summary extracted using an LLM client.
    • when set to "reflection_with_llm", it returns a reflection and self-criticism extracted using an LLM client. llm_config must be set in either the recipient or sender.

  3. For example:

chat_results = await user.a_initiate_chats(
    [
        {
            "chat_id": 1,
            "recipient": financial_assistant,
            "message": financial_tasks[0],
            "silent": False,
            "summary_method": "summary_with_llm",  # this only contains a summary
        },
        {
            "chat_id": 2,
            "prerequisites": [1],
            "recipient": research_assistant,
            "message": financial_tasks[1],
            "silent": False,
            "summary_method": "reflection_with_llm",  # this contains reflection, self-criticism, and a summary
        },
    ]
)

Additional context

summary_with_llm

ekzhu commented 5 months ago

You can customize the summary_args by setting summary_args={"summary_prompt": <your prompt with reflection>}.
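For example, a minimal sketch of that customization (assuming pyautogen v0.2; the agent names, llm_config contents, and prompt text below are illustrative, not defaults from the library):

import os
from autogen import ConversableAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]}

assistant = ConversableAgent("assistant", llm_config=llm_config)
user = ConversableAgent("user", human_input_mode="NEVER", llm_config=llm_config)

# Pass a reflection-style prompt through summary_args; the summary step
# then runs this prompt over the conversation instead of the default.
chat_result = user.initiate_chat(
    assistant,
    message="Compare solar and wind power for a small town.",
    max_turns=2,
    summary_method="reflection_with_llm",
    summary_args={
        "summary_prompt": (
            "Reflect on the conversation above: critique the strengths and "
            "weaknesses of the final answer, then summarize the takeaway. "
            "Around 150 words."
        )
    },
)
print(chat_result.summary)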

wangruxun commented 5 months ago

(1) I think a built-in option should be provided instead of requiring users to write the prompt themselves. (2) In addition, the current reflection_with_llm should either be renamed to summary_with_llm, or a reflection_with_llm that provides real reflection should be offered instead of a brief summary, because the current reflection_with_llm is just a summary and does not perform real reflection.

ekzhu commented 5 months ago

The prompt itself needs to be tuned for different LLMs, and we are providing a default that works okay with OpenAI's model. So, users often do need to customize the prompt anyway.

> Because the current reflection_with_llm is just a summary and does not perform real reflection.

Sure, though changing the API at this point is too late and breaks existing code.

cc @qingyun-wu

wangruxun commented 5 months ago

Yes. I have seen other agent frameworks provide reflection options. But if you don't want to change the interface, you could provide a reference example for summary_prompt.

ekzhu commented 4 months ago

> But if you don't want to change the interface, you could provide a reference example for summary_prompt.

This is a good idea. Do you want to take a look at the tutorial page? https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns

There are several examples of using summary_prompt there; however, the code is outdated, as summary_prompt is no longer a top-level argument. It should be:

{
    "recipient": group_chat_manager_with_intros,
    "summary_method": "reflection_with_llm",
    "summary_args": {
        "summary_prompt": "Summarize the sequence of operations used to turn "
        "the source number into the target number."
    },
},
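For context, a dict like this is one element of the chat queue passed to initiate_chats. A rough sketch of the surrounding call (my_agent and the message text are illustrative, and construction of group_chat_manager_with_intros from the tutorial is omitted):

chat_results = my_agent.initiate_chats(
    [
        {
            "recipient": group_chat_manager_with_intros,
            "message": "Turn the source number 3 into the target number 13.",
            "summary_method": "reflection_with_llm",
            "summary_args": {
                "summary_prompt": "Summarize the sequence of operations used to turn "
                "the source number into the target number."
            },
        },
    ]
)
print(chat_results[0].summary)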

Would you like to help fix this?

wangruxun commented 4 months ago

I'd like to help resolve this issue. I can provide an example of reflection.

According to the current implementation, summary_prompt in summary_args only takes effect when "summary_method": "reflection_with_llm" is set. But even then the "reflection" part does not really take effect; only the summary_prompt is applied.
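To illustrate this behavior (a sketch under the same assumptions as the earlier snippet, with user and assistant as the illustrative agents defined there):

# With "last_msg" (the default), summary_args is ignored entirely.
result_last = user.initiate_chat(
    assistant,
    message="What is 2 + 2?",
    max_turns=1,
    summary_method="last_msg",
    summary_args={"summary_prompt": "This prompt has no effect here."},
)

# Only with "reflection_with_llm" is summary_prompt used, and the output
# is whatever the prompt asks for; no built-in reflection is added.
result_reflect = user.initiate_chat(
    assistant,
    message="What is 2 + 2?",
    max_turns=1,
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": "Critique the answer, then summarize it."},
)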