
[Roadmap] Multimodal Orchestration #1975

Open · BeibinLi opened this issue 6 months ago

BeibinLi commented 6 months ago

> [!TIP]
> Want to get involved? We'd love it if you did! Please get in contact with the people assigned to this issue, or leave a comment. See general contributing advice here too.

Integrating multimodal and language-only agents presents significant challenges, as few tools currently support seamless inter-agent communication. For example, when one agent generates an image (through DALLE or code-based figure creation), a separate GPT-3 agent may have difficulty interpreting the visual content, leading to errors or simplifications such as converting the image into an `<image>` tag.

To ensure smooth operation, users must carefully design the agent workflow to avoid unexpected issues, such as the Plot-Critic scenarios in both the GPT-4V and DALLE notebooks. Hence, group chat, graph chat, nested chat, sequential chat, and many other pre-designed workflows do not work out of the box with multimodal features.

Our goal is to enable seamless interaction between multimodal and text-only agents, making it possible to include both in conversational workflows regardless of whether they are connected to multimodal models (`llm_config`).

Currently, the MultimodalConversableAgent specifically processes input message content to interpret images and format messages before communicating with the client. However, this approach complicates orchestration with other agents. For instance, the GroupChatManager lacks visual processing capabilities and thus cannot properly distribute tasks, and a Coder agent cannot read images and therefore cannot write matplotlib code.
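As context, here is a minimal sketch of that kind of pre-processing, assuming the `<img URL>` tag convention; the names and details are illustrative rather than AutoGen's actual formatter:

```python
import re
from typing import Any, Dict, List

# Illustrative sketch: convert "<img URL>" tags embedded in a plain-text prompt
# into an OpenAI-style multimodal content list.
IMG_TAG = re.compile(r"<img\s+([^>]+)>")

def format_multimodal(text: str) -> List[Dict[str, Any]]:
    content: List[Dict[str, Any]] = []
    pos = 0
    for match in IMG_TAG.finditer(text):
        if match.start() > pos:  # text before this image tag
            content.append({"type": "text", "text": text[pos:match.start()]})
        content.append({"type": "image_url", "image_url": {"url": match.group(1).strip()}})
        pos = match.end()
    if pos < len(text):  # trailing text after the last tag
        content.append({"type": "text", "text": text[pos:]})
    return content

# "Describe <img https://example.com/plot.png> briefly." becomes three parts:
# text, image_url, text. A text-only agent never sees the image_url part.
```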

The problem becomes more severe once we enable image generation, audio, OCR, and, in the future, video.

Common Issues [Example]

- Issue with group chat: #1838
- Fix for wrong handling of message: #2118

Things to consider:


Current issues and solutions

- Issue #2142, with a quick fix to resolve it.


Important Multimodal Updates

We suggest three major changes, categorized by their focus on accuracy and efficiency. We then outline a few future additions for image generation, audio processing, and OCR capabilities.

Major Update 0: Message Format
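For reference, the OpenAI vision API represents a multimodal message as a list of typed content parts rather than a plain string; a format along these lines is presumably what this update standardizes on:

```python
# OpenAI-style multimodal chat message: "content" is a list of typed parts.
# Text-only model endpoints reject the list form, which is why the updates
# below must decide where the conversion happens.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What trend does this figure show?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/figure.png"}},
    ],
}
```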

Major Update 1: Full Vision Support inside Client

Major Update 2: Efficient Vision Capability

Why two different ways to enable the "vision" feature? Answer: We propose two distinct approaches for enabling "vision" to satisfy different requirements. [Update 1], offering comprehensive multimodal support, allows all agents within a workflow to use a multimodal client, ensuring no information loss but at a higher computational cost. [Update 2], focusing on efficient vision capability, transcribes images into text captions, enabling broader application with text-only models at reduced cost but with potential information loss. This dual strategy gives users the flexibility to choose the optimal balance between accuracy and efficiency based on their specific needs and resources.
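A minimal sketch of the caption-based approach in [Update 2], assuming a hypothetical `caption_image` helper backed by a vision model; the real VisionCapability interface may differ:

```python
from typing import Any, Dict, List

def caption_image(url: str) -> str:
    """Hypothetical helper: ask a vision model (e.g., GPT-4V) to caption one image."""
    raise NotImplementedError

def transcribe_images(content: List[Dict[str, Any]]) -> str:
    """Replace image parts with text captions so text-only agents can participate.

    Cheaper than giving every agent a multimodal client, but lossy: anything
    the caption omits is invisible to downstream agents.
    """
    parts = []
    for item in content:
        if item["type"] == "image_url":
            parts.append(f"<image: {caption_image(item['image_url']['url'])}>")
        else:
            parts.append(item["text"])
    return " ".join(parts)
```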

Update 3: Image Generation Capability
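As one illustration (not the proposed interface), an image-generation capability could wrap the OpenAI Images API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_image(prompt: str) -> str:
    """Generate one image with DALL-E 3 and return its URL (sketch only)."""
    response = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return response.data[0].url
```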

Update 4: Audio Capabilities
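Likewise, an audio capability could transcribe speech to text so that text-only agents can consume it; a sketch assuming Whisper as the backend:

```python
from openai import OpenAI

client = OpenAI()

def transcribe_audio(path: str) -> str:
    """Transcribe an audio file to text with Whisper (illustrative sketch)."""
    with open(path, "rb") as audio_file:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    return result.text
```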


Update 5: OCR Capability (within VisionCapability)
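One plausible backend for the OCR capability is Tesseract via pytesseract; a sketch, assuming the image is available as a local file:

```python
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Run OCR on an image so its text becomes readable by text-only agents."""
    return pytesseract.image_to_string(Image.open(image_path))
```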

Update 6: Coordinate Detection Capability (within VisionCapability)


Additional context

@schauppi created a useful MM WebAgent for AutoGen.

Many other contributors also have great insights; please feel free to comment below.

rickyloynd-microsoft commented 6 months ago
> Limitation: if the agent does not generate images very often in its conversation, the API calls made in the text analyzer would be costly.

Just to verify, the text analyzer won't exist, and this cost won't be paid, unless the image generation capability is actually added to the agent.

WaelKarkoub commented 6 months ago

> Implementation: Transferring the multimodal message processing feature from the "MultimodalConversableAgent" to the class OpenAIWrapper. This involves adapting messages to a multimodal format based on the configuration.

IIRC, if you send a message whose content includes images to an LLM that doesn't support image ingestion, the API request will fail (for OpenAI at least). So in this implementation, OpenAIWrapper must be aware of which modalities the model can accept in order to generate the right message to send.

Each Agent could have a property that lists all the modalities it can accept; the list must be mutable (in case we add custom capabilities). For example:

```python
from typing import List, Protocol


class Agent(Protocol):
    @property
    def modalities(self) -> List[str]:
        """The modalities the LLM can accept/understand, etc."""
        ...

    def add_modality(self, modal: str) -> None:
        ...


class ConversableAgent(LLMAgent):
    def __init__(self, *args, **kwargs):  # remaining init args elided
        ...
        self._modalities = ["text"]  # text-only by default

    @property
    def modalities(self) -> List[str]:
        return self._modalities

    def add_modality(self, modal: str) -> None:
        self._modalities.append(modal)
```

However, this is a breaking change, because the ModelClient interface now has to accept the list of modalities as an input to the create method:

```python
from typing import Any, Dict, List, Protocol


class ModelClient(Protocol):
    ...

    def create(self, params: Dict[str, Any], modalities: List[str]) -> ModelClientResponseProtocol:
        ...
```
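To make this concrete, here is a minimal sketch of how the wrapper might use that modality list to down-convert a message before sending; everything beyond the message shape is hypothetical:

```python
from typing import Any, Dict, List

def adapt_message(message: Dict[str, Any], modalities: List[str]) -> Dict[str, Any]:
    """Down-convert a multimodal message for models that cannot accept images.

    If "image" is not among the supported modalities, image parts are replaced
    with a placeholder so the API request does not fail outright; a real
    implementation could caption them instead (see Update 2 above).
    """
    content = message.get("content")
    if isinstance(content, str) or "image" in modalities:
        return message  # plain text already, or the model handles images
    text_parts = [
        item["text"] if item["type"] == "text" else "<image>"
        for item in content
    ]
    return {**message, "content": " ".join(text_parts)}
```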

There are probably better ways out there, but this is what I've been thinking about doing recently.

BeibinLi commented 6 months ago

@WaelKarkoub Thanks for your suggestions! Very helpful~

I am thinking about adding it to the Client, with some parameters in config_list. Let me try to create a PR, and I will tag you!

BeibinLi commented 6 months ago

Feel free to give any suggestions!

@tomorrmato @deerleo @Josephrp @antoan @ViperVille007 @scortLi @ibishara

BeibinLi commented 6 months ago

@awssenera

krowflow commented 4 months ago

Come on AutoGen, we can't let CrewAI or LangChain take over. Let's use chain of thought. We started this multi-agent stuff, and this is the thanks we get. Let's go full desktop, full futuristic GUI application, call it "Autogen Studio X", created for advanced GenX users. Let's stop playing around at the back door and just drop through the roof with this multi-conversational, multimodal agent-to-agent invasion... Give the people what they want. "Slay the Sequence"