One configuration related to this is the system_message in AssistantAgent. A few examples:
Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent)
Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent)
Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent)
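All three examples share the same shape: several member agents, each defined largely by its system_message, plus a manager agent that routes the conversation. A minimal sketch of that structure (the agent roles and gpt4_config are placeholders, not taken verbatim from the notebooks):

```python
import autogen

# Placeholder LLM config; substitute your own model and credentials.
gpt4_config = {"config_list": [{"model": "gpt-4"}]}

# Member agents: each one's role lives in its system_message.
planner = autogen.AssistantAgent(
    name="Planner",
    llm_config=gpt4_config,
    system_message="Planner. Suggest a plan and revise it based on feedback.",
)
engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=gpt4_config,
    system_message="Engineer. Write code to carry out the approved plan.",
)
user_proxy = autogen.UserProxyAgent(
    name="Admin",
    human_input_mode="TERMINATE",
    code_execution_config=False,
)

# One manager agent coordinates the group chat among the members.
groupchat = autogen.GroupChat(
    agents=[user_proxy, planner, engineer], messages=[], max_round=12
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=gpt4_config)

user_proxy.initiate_chat(manager, message="Find and summarize recent papers on LLM agents.")
```

Each member's behavior is pinned down almost entirely by the text of its system_message.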
Dear @sonichi,
Upon revisiting the examples you shared, there is a clear gap in character specification. As engineers, we prioritize outcomes and behaviors. For instance, consider the "Scientist" profile from one of the links:
scientist = autogen.AssistantAgent(
    name="Scientist",
    llm_config=gpt4_config,
    system_message="""Scientist. You adhere to an approved plan. You can classify papers based on their abstracts. You don't code.""",
)
A scientist's range of interests can be vast, and paper categorization can be approached in many ways. If two people were given the same categorization task, their methods and results could differ, reflecting their distinct styles and viewpoints.
While examples rooted in science and engineering can be specified explicitly, fields like psychotherapy present a different scenario: outcomes are significantly shaped by individual beliefs and personality. To illustrate, an introverted person diagnosed with depression might behave quite differently from an extroverted person with the same condition.
Moreover, envision an AI-driven fitness coach application. Beyond being polite and encouraging, the AI coach must discern the trainee's personality; that insight would let it tailor the fitness program to the trainee's unique needs.
Additionally, AI can emulate specific situations in a safe setting, making it possible to teach people effective strategies, mirroring techniques employed in Cognitive Behavioral Therapy (CBT).
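Concretely, nothing stops a system_message from carrying character traits alongside task rules. A rough sketch, reusing the AssistantAgent pattern above (the persona text and config values are purely illustrative, not from any shipped example):

```python
import autogen

gpt4_config = {"config_list": [{"model": "gpt-4"}]}  # placeholder config

# Illustrative persona-driven agent for a supervised training exercise.
patient = autogen.AssistantAgent(
    name="Patient",
    llm_config=gpt4_config,
    system_message="""You are role-playing an introverted person experiencing
depression, as part of a supervised exercise for psychology students.
Character traits: withdrawn, low energy, short answers, deflects questions
about feelings, rarely volunteers information.
Stay in character; do not break role to offer help or advice.""",
)
```

The same field that says "you don't code" can just as well say "you deflect questions about feelings."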
It's crucial to underscore character's profound influence on both sides of an interaction; it plays a pivotal role that demands serious consideration.
To sum up, defining character for AI isn't merely advantageous; it's indispensable. I'm convinced this domain merits deeper exploration.
One thing that I would add is that agent definition typically involves few-shot prompts to help narrow the agent down to the right responses. system_message is great, but on its own it doesn't allow you to provide the synthetic conversation turns that would give you few-shot prompting on a chat-completion-tuned model.
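To make that concrete: on a chat-completion model, few-shot prompting amounts to pre-seeding the message list with fabricated user/assistant turns before the real exchange begins. A minimal sketch against the OpenAI Python client (the persona and turns are invented for illustration; as noted above, system_message alone has no slot for them):

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Role-play an introverted patient with depression."},
    # Synthetic few-shot turns that demonstrate the desired persona.
    {"role": "user", "content": "How has your week been?"},
    {"role": "assistant", "content": "Fine, I guess. I didn't really do much."},
    {"role": "user", "content": "Did you get out to see your friends?"},
    {"role": "assistant", "content": "No... I didn't really feel up to it."},
    # The real conversation starts here.
    {"role": "user", "content": "What would you like to talk about today?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

An agent framework would need a hook for injecting such turns into the history it sends to the model, which is exactly what system_message by itself doesn't cover.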
We are closing this issue due to inactivity; please reopen if the problem persists.
Is there a mechanism in place that allows agents to be customized to display distinct character attributes? Many of these digital agents are designed primarily to assist users, always responding with kindness and politeness. However, I see a broader application for them beyond mere assistance.

I believe these agents can be pivotal tools in social training and psychotherapeutic education. Imagine an agent tailored to accurately emulate the behavior and responses of a depressed individual. In a controlled and safe setting, psychology students and budding therapists could engage with such an agent to hone their communication and empathy skills, better preparing them for real-life scenarios. Is there a framework or feature within this platform that facilitates such advanced configurations?