microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

Taking LLM class specification outside the code #148

Closed yogeshhk closed 3 months ago

yogeshhk commented 1 year ago

The current Completion call seems hardcoded to OpenAI, as seen in autogen/oai/completion.py

Is it possible to move the instantiation of the LLM object outside the framework and drive it via config? That is, the config would specify not just model names but also LLM class names, such as ChatOpenAI or even ChatVertexAI.

Meaning, the LLM class name would also be specified in the config, and that class would then be instantiated inside the code for the completion call.

This arrangement would open up AutoGen to a far wider audience. People could use LangChain LLM class names, or any other local custom LLM classes that produce compatible LLM objects (abiding by some protocol/interface).

Basically, can you make it easier to use any LLM class via a config setting?
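For illustration, a minimal sketch of what such config-driven instantiation could look like (the `llm_class` key and the `build_llm` helper are hypothetical, not part of AutoGen's config schema):

    import importlib

    # Hypothetical config: the LLM class is named as a dotted import path.
    config = {
        "llm_class": "langchain.chat_models.ChatOpenAI",  # or ChatVertexAI, etc.
        "model": "gpt-3.5-turbo",
        "temperature": 0.0,
    }

    def build_llm(config: dict):
        """Import and instantiate whatever LLM class the config names."""
        module_path, class_name = config["llm_class"].rsplit(".", 1)
        llm_cls = getattr(importlib.import_module(module_path), class_name)
        # Remaining keys are passed through as constructor arguments.
        kwargs = {k: v for k, v in config.items() if k != "llm_class"}
        return llm_cls(**kwargs)

    llm = build_llm(config)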

Nivek92 commented 1 year ago

Passing it via the config would still be limited. It would be better to allow passing an LLM class as a parameter. Then the user could inherit from that LLM class and inject the subclass into the agents.
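For example, a sketch of that injection idea (the `BaseLLM` class, the `Agent` stand-in, and the `llm_class` parameter are all illustrative, not existing AutoGen APIs):

    class BaseLLM:
        """Hypothetical base class a user-supplied LLM inherits from."""

        def create(self, **config) -> dict:
            raise NotImplementedError

    class MyPrivateLLM(BaseLLM):
        """User-defined LLM backed by a private model."""

        def create(self, **config) -> dict:
            # ... call the private model and return an OpenAI-style response ...
            return {"choices": [{"message": {"role": "assistant", "content": "..."}}]}

    class Agent:
        """Stand-in for an agent that accepts an injected LLM class."""

        def __init__(self, name: str, llm_class: type = BaseLLM):
            self.name = name
            self.llm = llm_class()  # instantiate whichever class was injected

    agent = Agent("assistant", llm_class=MyPrivateLLM)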

sonichi commented 1 year ago

#95 made an effort to extend to LLMs incompatible with the OpenAI API.

The same approach can be used to extend to other LLM providers.

yogeshhk commented 1 year ago

Got it, thanks @sonichi. My intention was: rather than keep supporting every new LLM that comes on the scene, why not move the LLM object creation call outside of the framework? That LLM object would only need to support a certain interface. With this arrangement, anyone with a private LLM class could also use AutoGen. [@Nivek92]
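One way to pin down that "certain interface" in Python is structural typing, for example (the `LLMProtocol` name is illustrative):

    from typing import Any, Protocol, runtime_checkable

    @runtime_checkable
    class LLMProtocol(Protocol):
        """Anything with a matching create() conforms; no inheritance needed."""

        def create(self, **config: Any) -> dict:
            """Return an OpenAI-style completion response."""
            ...

    class SomePrivateLLM:
        def create(self, **config: Any) -> dict:
            return {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}

    assert isinstance(SomePrivateLLM(), LLMProtocol)  # structural check passes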

sonichi commented 1 year ago

Really interesting question: what's the most generic interface for LLM inference? I'm betting on OpenAI's API because it's currently the leader. How will this evolve? I don't know. If there is a more widely recognized abstraction for LLM inference than OpenAI's API, I'd be curious to learn about it.

yogeshhk commented 1 year ago

@sonichi if another LLM object supports whichever functions AutoGen currently calls on the ChatOpenAI object, that should be sufficient, right?

sonichi commented 1 year ago

What's the interface for "LLM object"?

yogeshhk commented 1 year ago

Currently, in autogen/autogen/oai/completion.py, the OpenAI completion class is selected at around line 195:

        openai_completion = (
            openai.ChatCompletion
            if config["model"].replace("gpt-35-turbo", "gpt-3.5-turbo") in cls.chat_models
            or issubclass(cls, ChatCompletion)
            else openai.Completion
        )

Then it is used to generate a response at around line 207:

                if "request_timeout" in config:
                    response = openai_completion.create(**config)
                else:
                    response = openai_completion.create(request_timeout=request_timeout, **config)

So I think, if we move the instantiation outside of autogen so that any suitable LLM object can be passed in, that object just has to have a create method callable on it, right? That's the interface function for now. Is that correct, or am I missing something? @sonichi
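Concretely, the change might look something like this sketch (the `completion_object` config key is hypothetical, `CHAT_MODELS` abbreviates `cls.chat_models`, and the fallback mirrors the selection logic quoted above in the pre-1.0 openai style used at the time):

    import openai  # openai<1.0, matching the code quoted above

    CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}  # abbreviated stand-in for cls.chat_models

    def get_completion(config: dict, request_timeout: int = 60):
        """Sketch: honor an injected completion object, else fall back to OpenAI."""
        config = dict(config)  # avoid mutating the caller's dict
        # Hypothetical key: any object exposing create() can be injected here.
        completion = config.pop("completion_object", None)
        if completion is None:
            completion = (
                openai.ChatCompletion
                if config["model"].replace("gpt-35-turbo", "gpt-3.5-turbo") in CHAT_MODELS
                else openai.Completion
            )
        # create() is the whole interface contract on the injected object.
        if "request_timeout" in config:
            return completion.create(**config)
        return completion.create(request_timeout=request_timeout, **config)

With that single create() contract, the OpenAI classes, a LangChain wrapper, or a private model client would all be interchangeable.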

sonichi commented 1 year ago

I see what you mean. That's a good idea. :)