Open wangruxun opened 5 months ago
You can customize the `summary_args` by setting `summary_args={"summary_prompt": <your prompt with reflection>}`.
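For context, a small sketch of how such a customized prompt is passed. The prompt text here is illustrative, not a library default:

```python
# Illustrative reflection-style prompt (an assumption, not autogen's default).
REFLECTION_SUMMARY_PROMPT = (
    "Critique the strengths and weaknesses of the answers in this "
    "conversation, then summarize the final takeaway."
)

# These keyword arguments are what you would pass to initiate_chat, e.g.:
#   user.initiate_chat(assistant, message=task, **summary_kwargs)
summary_kwargs = {
    "summary_method": "reflection_with_llm",
    "summary_args": {"summary_prompt": REFLECTION_SUMMARY_PROMPT},
}
```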
(1) I think an option should be defined instead of letting users fill it in themselves. (2) In addition, the current reflection_with_llm should be renamed to summary_with_llm, or reflection_with_llm should provide real reflection functionality instead of a brief summary. The current reflection_with_llm is just a summary and does not perform real reflection.
The prompt itself needs to be tuned for different LLMs, and we provide a default that works okay with OpenAI's models. So users often do need to customize the prompt anyway.
Because the current reflection_with_llm is just a summary, and does not describe real reflection
Sure, though changing the API at this point is too late and breaks existing code.
cc @qingyun-wu
Yes, because I saw other agents provide reflection options. But if you don't want to change the interface, you could provide a reference example for summary_prompt.
But if you don't want to change the interface, you can provide a reference example for summary_prompt.
This is a good idea. Do you want to take a look at the tutorial page: https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns
There are several examples of using summary_prompt; however, the code is outdated, as summary_prompt is no longer a top-level argument. It should be:
```python
{
    "recipient": group_chat_manager_with_intros,
    "summary_method": "reflection_with_llm",
    "summary_args": {
        "summary_prompt": "Summarize the sequence of operations used to turn "
                          "the source number into target number."
    },
},
```
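For completeness, the dict above is one element of the list passed to initiate_chats. A minimal sketch of where it fits; the recipient is a placeholder here, standing in for the tutorial's GroupChatManager:

```python
group_chat_manager_with_intros = None  # placeholder for the tutorial's GroupChatManager

chat_spec = {
    "recipient": group_chat_manager_with_intros,
    "summary_method": "reflection_with_llm",
    "summary_args": {
        "summary_prompt": (
            "Summarize the sequence of operations used to turn "
            "the source number into target number."
        )
    },
}

# In the tutorial this spec would be used as one element of the list:
#   user_proxy.initiate_chats([chat_spec, ...])
```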
Would you like to help fix this?
I'd like to help resolve this issue. I can provide an example of reflection.
According to your current configuration, for summary_prompt in summary_args to take effect, you must set "summary_method": "reflection_with_llm". But in fact that setting does not take effect; only "summary_prompt" takes effect.
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
I suggest adding:

DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT = "Why did you give this thought? Around 150 words. As a super agent, give constructive self-criticism of the current machine evaluation, covering its weaknesses and strengths, and summarize."

(2.1) Before modification: Supported strings are "last_msg" and "reflection_with_llm"; llm_config must be set in either the recipient or sender. The description of reflection_with_llm is inaccurate: currently it is just a summary, so the name should be changed to summary_with_llm.

(2.2) After modification: Supported strings are "last_msg", "summary_with_llm", and "reflection_with_llm"; llm_config must be set in either the recipient or sender.

(3) For example:

```python
chat_results = await user.a_initiate_chats(
    [
        {
            "chat_id": 1,
            "recipient": financial_assistant,
            "message": financial_tasks[0],
            "silent": False,
            "summary_method": "summary_with_llm",  # this only contains "summary"
        },
        {
            "chat_id": 2,
            "prerequisites": [1],
            "recipient": research_assistant,
            "message": financial_tasks[1],
            "silent": False,
            "summary_method": "reflection_with_llm",  # this contains reflection, self-criticism and summary
        },
    ]
)
```

Additional context
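To make the proposed split concrete, here is a minimal sketch of how the two summary_method strings could map to different default prompts. All names here are hypothetical; none of them exist in autogen today, and the prompt texts are illustrative:

```python
# Hypothetical constants illustrating the proposed split between a plain
# summary and a reflective self-critique (assumed names, not autogen's API).
DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation."
DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT = (
    "As a super agent, give constructive self-criticism of the current "
    "evaluation, covering its weaknesses and strengths, then summarize. "
    "Around 150 words."
)

# Hypothetical dispatch table: summary_method string -> prompt sent to the LLM.
SUMMARY_PROMPTS = {
    "summary_with_llm": DEFAULT_SUMMARY_PROMPT,
    "reflection_with_llm": DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT,
}


def resolve_summary_prompt(summary_method, summary_args=None):
    """Return the prompt to use: an explicit summary_prompt wins, else the
    default associated with the chosen summary_method."""
    summary_args = summary_args or {}
    return summary_args.get("summary_prompt", SUMMARY_PROMPTS[summary_method])
```

With this shape, a user-supplied summary_prompt in summary_args still overrides either default, so existing code that customizes the prompt keeps working.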