daveshap / OpenAI_Agent_Swarm

HAAS = Hierarchical Autonomous Agent Swarm - "Resistance is futile!"
MIT License
2.98k stars · 380 forks

BOUNTY: Test various inter-agent communication strategies #120

Closed: daveshap closed this issue 10 months ago

daveshap commented 10 months ago

There have been several conversations around communication theory, tech stack, and layered strategies. I'd like to see some folks do some experiments around these to identify what works and what doesn't.

Overall, the PRIMARY goal here is to start surfacing general principles that optimize communication: minimize noise and maximize signal. What I mean by general principles is:

agniiva commented 10 months ago

https://only-bots.ai

daveshap commented 10 months ago

Please provide more context with all posts @agniiva

marktellez commented 10 months ago

Howdy! Love your videos. We are working along the same lines of research, and since I've used up this month's sponsor-provided OpenAI credits, I thought it would be great to collaborate with you.

I've been successfully experimenting with OpenAI and multiple agents, managing their interactions through a slim layer of Python code and OpenAI API calls. This approach has uncovered some valuable patterns, and I'm open to documenting them for your project.
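For context, a "slim layer" of that kind can be sketched in a few lines of Python. This is only an illustration under my own assumptions: the `llm` callable stands in for whatever wrapper you put around the OpenAI API, and `run_exchange` is a hypothetical helper name, not code from this repo.

```python
from typing import Callable, List, Tuple

# (system_prompt, incoming_message) -> reply; inject your real OpenAI call here.
LLM = Callable[[str, str], str]

def run_exchange(llm: LLM, roles: Tuple[str, str], opening: str, turns: int = 4) -> List[str]:
    """Pass one message back and forth between two role-prompted agents.

    Each turn, the current agent's system prompt frames the model call and
    the previous reply becomes the next agent's input.
    """
    transcript = [opening]
    message = opening
    for i in range(turns):
        message = llm(roles[i % 2], message)  # alternate between the two roles
        transcript.append(message)
    return transcript
```

Swapping in a real model just means replacing `llm` with a function that sends `roles[i % 2]` as the system message to the chat completions endpoint.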

I'd be more than willing to delve deeper into these ideas, perhaps in a wiki or a discussion forum on your platform. For now, let me share some insights here:

A key concept I've developed is the pairing of every agent with a "Critic" agent. Think of the Critic as an agent's 'conscience', engaging in an internal dialogue to reach a consensus before externalizing the communication to another agent with a different role.

Critic

The Critic agent plays a vital role and exhibits some consistent behaviors across all agents:

- Instruct the agent to optimize token usage, maintaining context with minimal verbosity.
- Cross-verify agent findings with external sources and accurately cite these findings.
- Maintain the agent's focus on the assigned topic and task, regularly reminding it of its role.
- Identify and prevent the 'polite loop' often encountered at the end of conversations.
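One way to realize the agent/critic pairing is a bounded internal-review loop, sketched here under my own assumptions: the class name, the "APPROVE" convention, and the injected `llm` callable are all hypothetical, not code from this project.

```python
from dataclasses import dataclass
from typing import Callable

# (system_prompt, message) -> reply; stands in for a real OpenAI API call.
LLM = Callable[[str, str], str]

@dataclass
class CriticizedAgent:
    """An agent whose every draft is reviewed by a paired Critic before it
    is externalized to another agent."""
    agent_prompt: str   # role/system prompt for the working agent
    critic_prompt: str  # role/system prompt for its Critic 'conscience'
    llm: LLM
    max_rounds: int = 3  # cap the internal dialogue so the swarm can't stall

    def respond(self, incoming: str) -> str:
        draft = self.llm(self.agent_prompt, incoming)
        for _ in range(self.max_rounds):
            verdict = self.llm(self.critic_prompt, draft)
            if verdict.strip().upper().startswith("APPROVE"):
                break  # consensus reached; externalize the draft
            # Critic rejected the draft: revise it using the feedback.
            draft = self.llm(
                self.agent_prompt,
                f"Revise this reply. Feedback: {verdict}\n\n{draft}",
            )
        return draft
```

The `max_rounds` cap is a deliberate design choice: a stubborn Critic should degrade to "ship the best draft so far" rather than block the conversation forever.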

Let's break down each bullet point to understand the specific LLM (Large Language Model) problem it addresses:

Optimize Token Usage, Maintaining Context with Minimal Verbosity:

- LLM Problem Addressed: This tackles the challenge of verbosity and redundancy in LLM responses. LLMs, especially when unconstrained, can produce lengthy responses that use more computational resources (tokens) than necessary, potentially leading to inefficiency and higher costs.
- Solution Offered: By instructing the agent to use fewer tokens without losing context, it ensures concise and efficient communication, optimizing computational resources while maintaining the quality and relevance of the response.

Cross-Verify Agent Findings with External Sources and Accurately Cite These Findings:

- LLM Problem Addressed: LLMs can sometimes generate responses based on outdated, incorrect, or incomplete information, as they rely on pre-existing data up to their last training cut-off.
- Solution Offered: By double-checking the findings with current external sources and citing them, the Critic agent adds a layer of validation and currentness to the information provided by the LLM, enhancing its reliability and factual accuracy.

Maintain the Agent's Focus on the Assigned Topic and Task, Regularly Reminding It of Its Role:

- LLM Problem Addressed: LLMs can drift off-topic or lose sight of the original task or question, especially in longer or more complex dialogues.
- Solution Offered: Regularly reminding the agent of its role and the task at hand helps keep the conversation focused and relevant, ensuring that the LLM stays on track and delivers pertinent and goal-oriented responses.

Identify and Prevent the 'Polite Loop' Often Encountered at the End of Conversations:

- LLM Problem Addressed: LLMs can sometimes enter a cycle of repetitive or redundant politeness, especially towards the end of a conversation, where they keep the interaction going unnecessarily.
- Solution Offered: The Critic agent can detect when a conversation is naturally concluding and prevent the LLM from engaging in unnecessary prolongation, thereby streamlining the interaction and respecting the user's time.
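As an illustration of that last point, a crude polite-loop detector can key off closing pleasantries. The cue list, window size, and function name below are my own guesses for the sketch, not anything specified in this thread.

```python
from typing import List

# Phrases that typically signal a conversation is winding down.
CLOSING_CUES = ("thank", "welcome", "glad to help", "goodbye", "anytime", "my pleasure")

def is_polite_loop(messages: List[str], window: int = 4) -> bool:
    """Return True when the last `window` messages are all closing
    pleasantries, i.e. the agents are thanking each other in circles
    instead of ending the exchange."""
    tail = messages[-window:]
    if len(tail) < window:
        return False  # not enough history to call it a loop
    return all(any(cue in m.lower() for cue in CLOSING_CUES) for m in tail)
```

A production Critic would likely be smarter, e.g. comparing embedding similarity of recent turns, but even a keyword heuristic like this lets the layer cut the loop and close the channel.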

These are just a few highlights. I'm excited to explore more and look forward to your thoughts on this collaboration!