Neethan54 opened 3 months ago
Hey @Neethan54! 👋 I'm here to help you out with any bugs, questions, or contributions you might have. Let's tackle this together!
To use Azure OpenAI models (3.5 and 4.0) inside a custom model as a mixture of agents, follow these steps:
Install LlamaIndex and the Azure OpenAI LLM integration:
!pip install llama-index llama-index-llms-azure-openai
Set up your Azure environment by configuring environment variables:
import os
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
Initialize the Azure OpenAI models:
from llama_index.llms.azure_openai import AzureOpenAI

# Initialize model 3.5 ("engine" is the Azure deployment name)
llm_35 = AzureOpenAI(
    engine="deployment-name-35", model="gpt-35-turbo-16k", temperature=0.0
)

# Initialize model 4.0
llm_40 = AzureOpenAI(
    engine="deployment-name-40", model="gpt-4", temperature=0.0
)
Use the models as a mixture of agents: you can now route different tasks to each model or combine their outputs as needed (a sketch of combining their outputs follows the example below). For example:
# Using model 3.5 for a completion task
response_35 = llm_35.complete("The sky is a beautiful blue and")
print("Model 3.5 response:", response_35)

# Using model 4.0 for a chat task
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality."),
    ChatMessage(role="user", content="Hello"),
]
response_40 = llm_40.chat(messages)
print("Model 4.0 response:", response_40)
By following these steps, you can effectively use Azure OpenAI models 3.5 and 4.0 inside a custom model as a mixture of agents [1][2][3][4].
To continue talking to Dosu, mention @dosu.
@Neethan54 you can follow the guide I linked
You can implement a mixture of agents inside a custom LLM. That custom LLM class can use multiple OpenAI instances. You can even just run the Mixture of Agents pack inside the custom LLM if you don't want to reimplement the logic:
from typing import Any

from llama_index.core import Settings
from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback

# Needs: pip install llama-index-packs-mixture-of-agents
from llama_index.packs.mixture_of_agents import MixtureOfAgentsPack
class OurLLM(CustomLLM):
    context_window: int = 32000
    num_output: int = 512
    model_name: str = "custom"
    # CustomLLM is a Pydantic model, so this field needs a type annotation.
    # The pack arguments are elided here; construct it with your own LLMs.
    mixture_of_agents: Any = MixtureOfAgentsPack(...)
    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Delegate the actual generation to the mixture-of-agents pack.
        response = self.mixture_of_agents.run(prompt)
        return CompletionResponse(text=str(response))

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        raise NotImplementedError
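Once OurLLM is defined, you can construct the pack with the two Azure models and plug the custom LLM in wherever an LLM is accepted. A minimal usage sketch; the MixtureOfAgentsPack constructor arguments shown here (llm as aggregator, reference_llms as proposers) are my best recollection of the pack's interface, so verify them against the pack's README:

# Usage sketch -- constructor argument names are assumptions, check the pack's README
moa_pack = MixtureOfAgentsPack(
    llm=llm_40,                       # aggregator model (assumed parameter)
    reference_llms=[llm_35, llm_40],  # proposer models (assumed parameter)
)

our_llm = OurLLM(mixture_of_agents=moa_pack)
Settings.llm = our_llm  # make the custom LLM the default everywhere

print(our_llm.complete("What is a mixture of agents?"))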
Hi @logan-markewich, thanks for helping! But I'm constantly getting an "Event loop is closed" error from the ReAct agent when I use the CustomLLM as the llm. How can I solve this? Thanks in advance.
I have Azure OpenAI models (3.5 and 4o); how can I use these models inside the custom model as a mixture of agents?
_Originally posted by @Neethan54 in https://github.com/run-llama/llama_index/issues/14962#issuecomment-2252303398_