sam-h-long opened 1 month ago
I found a similar issue that was resolved by integrating the `LlamaIndexLLMWrapper` into the code. Here's how you can resolve the `AttributeError: 'AzureOpenAI' object has no attribute 'set_run_config'` by using the `LlamaIndexLLMWrapper`:

Wrap your `AzureOpenAI` instance with `LlamaIndexLLMWrapper`:
```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_recall
from ragas.llms.base import LlamaIndexLLMWrapper, RunConfig
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from llama_index.llms.azure_openai import AzureOpenAI

gpt_35 = AzureOpenAI(
    model="gpt-35-turbo",
    engine="gpt-35-turbo",
    temperature=0.1,
    max_tokens=None,
    api_version="2024-06-01",
    azure_endpoint="https://XXXXXXX",
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    ),
    use_azure_ad=True,
)

# Create a run configuration
run_config = RunConfig(timeout=30)

# Wrap the AzureOpenAI instance with LlamaIndexLLMWrapper
llama_index_llm = LlamaIndexLLMWrapper(llm=gpt_35, run_config=run_config)

data_samples = {
    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
    'answer': ['The first superbowl was held on Jan 15, 1967',
               'The most super bowls have been won by The New England Patriots'],
    'contexts': [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
                 ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference']],
    'ground_truth': ['The first superbowl was held on January 15, 1967',
                     'The New England Patriots have won the Super Bowl a record six times']
}
dataset = Dataset.from_dict(data_samples)

score = evaluate(dataset, metrics=[context_recall], llm=llama_index_llm)
score.to_pandas()
```
By wrapping your `AzureOpenAI` instance with `LlamaIndexLLMWrapper`, you ensure that the `set_run_config` method is available, thus resolving the `AttributeError` [1][2].
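The failure mode can be reproduced without any Azure credentials. In the minimal sketch below, the two classes are stand-ins for illustration only (not the real `AzureOpenAI` or the Ragas wrapper): `evaluate()` calls `set_run_config` on whatever is passed as `llm`, so only the wrapped object works.

```python
# Stand-ins for illustration: RawLLM mimics llama_index's AzureOpenAI
# (no set_run_config), Wrapped mimics Ragas' LlamaIndexLLMWrapper.
class RawLLM:
    pass

class Wrapped:
    def __init__(self, llm):
        self.llm = llm

    def set_run_config(self, run_config):
        self.run_config = run_config

# evaluate() invokes llm.set_run_config(...) internally, so the raw LLM
# raises AttributeError while the wrapped one succeeds.
print(hasattr(RawLLM(), "set_run_config"))           # False
print(hasattr(Wrapped(RawLLM()), "set_run_config"))  # True
```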
To continue talking to Dosu, mention @dosu.
I tried wrapping my `AzureOpenAI` from LlamaIndex in the Ragas wrapper (`LlamaIndexLLMWrapper`), but I still get the same error:
```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_recall
from ragas.llms.base import LlamaIndexLLMWrapper, RunConfig

# Create a run configuration
run_config = RunConfig(timeout=30)

# Wrap the AzureOpenAI instance (gpt_35, constructed as above) with LlamaIndexLLMWrapper
llama_index_llm = LlamaIndexLLMWrapper(llm=gpt_35, run_config=run_config)

data_samples = {
    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
    'answer': ['The first superbowl was held on Jan 15, 1967',
               'The most super bowls have been won by The New England Patriots'],
    'contexts': [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
                 ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference']],
    'ground_truth': ['The first superbowl was held on January 15, 1967',
                     'The New England Patriots have won the Super Bowl a record six times']
}
dataset = Dataset.from_dict(data_samples)

score = evaluate(dataset, metrics=[context_recall], llm=llama_index_llm)
score.to_pandas()
```
Error:

```
AttributeError: 'AzureOpenAI' object has no attribute 'set_run_config'
```
To resolve the `AttributeError: 'AzureOpenAI' object has no attribute 'set_run_config'` when wrapping your `AzureOpenAI` instance from LlamaIndex in the `LlamaIndexLLMWrapper`, you need to ensure that the `LlamaIndexLLMWrapper` class includes the `set_run_config` method. Here is the complete implementation of the `LlamaIndexLLMWrapper` class with the `set_run_config` method:
```python
class LlamaIndexLLMWrapper(BaseRagasLLM):
    """
    An Adaptor for LlamaIndex LLMs
    """

    def __init__(
        self,
        llm: BaseLLM,
        run_config: t.Optional[RunConfig] = None,
    ):
        self.llm = llm
        self._signature = ""
        if type(self.llm).__name__.lower() == "bedrock":
            self._signature = "bedrock"
        if run_config is None:
            run_config = RunConfig()
        self.set_run_config(run_config)

    def set_run_config(self, run_config: RunConfig):
        self.run_config = run_config

    def check_args(
        self,
        n: int,
        temperature: float,
        stop: t.Optional[t.List[str]],
        callbacks: Callbacks,
    ) -> dict[str, t.Any]:
        if n != 1:
            logger.warning("n values greater than 1 not supported for LlamaIndex LLMs")
        if temperature != 1e-8:
            logger.info("temperature kwarg passed to LlamaIndex LLM")
        if stop is not None:
            logger.info("stop kwarg passed to LlamaIndex LLM")
        if callbacks is not None:
            logger.info(
                "callbacks not supported for LlamaIndex LLMs, ignoring callbacks"
            )
        if self._signature == "bedrock":
            return {"temperature": temperature}
        else:
            return {
                "n": n,
                "temperature": temperature,
                "stop": stop,
            }

    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        kwargs = self.check_args(n, temperature, stop, callbacks)
        li_response = self.llm.complete(prompt.to_string(), **kwargs)
        return LLMResult(generations=[[Generation(text=li_response.text)]])

    async def agenerate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: t.Optional[float] = None,
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        if temperature is None:
            temperature = 1e-8
        kwargs = self.check_args(n, temperature, stop, callbacks)
        li_response = await self.llm.acomplete(prompt.to_string(), **kwargs)
        return LLMResult(generations=[[Generation(text=li_response.text)]])
```
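The `check_args` branch above narrows the kwargs passed to Bedrock models. A self-contained sketch of just that branching (a stand-alone function for illustration, not the real class) behaves like this:

```python
def check_args(signature, n=1, temperature=1e-8, stop=None):
    # Mirrors the wrapper's branching: Bedrock LLMs accept only temperature,
    # every other LlamaIndex LLM gets n/temperature/stop passed through.
    if signature == "bedrock":
        return {"temperature": temperature}
    return {"n": n, "temperature": temperature, "stop": stop}

print(check_args("bedrock"))  # {'temperature': 1e-08}
print(check_args(""))         # {'n': 1, 'temperature': 1e-08, 'stop': None}
```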
Make sure that your `LlamaIndexLLMWrapper` class includes the `set_run_config` method as shown above. This should resolve the `AttributeError` [1].
I will try this @dosu, but this code depends on a lot of package imports.

In general, is there a reason the existing `class LlamaIndexLLMWrapper(BaseRagasLLM)` (link in the Ragas repo) does NOT have the `set_run_config()` function implemented within the class, in contrast to `class LangchainLLMWrapper(BaseRagasLLM)`, which does have the `set_run_config()` function implemented?
The `LlamaIndexLLMWrapper` class in the Ragas repository does have the `set_run_config()` function implemented. Here is the relevant part of the code:
```python
class LlamaIndexLLMWrapper(BaseRagasLLM):
    """
    An Adaptor for LlamaIndex LLMs
    """

    def __init__(
        self,
        llm: BaseLLM,
        run_config: t.Optional[RunConfig] = None,
    ):
        self.llm = llm
        self._signature = ""
        if type(self.llm).__name__.lower() == "bedrock":
            self._signature = "bedrock"
        if run_config is None:
            run_config = RunConfig()
        self.set_run_config(run_config)

    def set_run_config(self, run_config: RunConfig):
        self.run_config = run_config
```
The `set_run_config()` function is called in the constructor of the `LlamaIndexLLMWrapper` class, similar to the `LangchainLLMWrapper` class. Therefore, the `LlamaIndexLLMWrapper` class does have the `set_run_config()` function implemented [1].
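To illustrate the constructor pattern with a minimal stand-in (not the actual Ragas classes), calling `set_run_config` from `__init__` guarantees the attribute exists on every instance:

```python
class RunConfig:
    """Stub run configuration; the timeout default here is illustrative."""
    def __init__(self, timeout=180):
        self.timeout = timeout

class Wrapper:
    """Stub wrapper following the same __init__ -> set_run_config pattern."""
    def __init__(self, llm, run_config=None):
        self.llm = llm
        if run_config is None:
            run_config = RunConfig()
        self.set_run_config(run_config)

    def set_run_config(self, run_config):
        self.run_config = run_config

w = Wrapper(llm=object(), run_config=RunConfig(timeout=30))
print(w.run_config.timeout)  # 30
```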
I see that now, but either way I am still getting 😢:

```
AttributeError: 'AzureOpenAI' object has no attribute 'set_run_config'
```
- [ ] I have checked the documentation and related resources and couldn't resolve my bug.

**Describe the bug**
Further request for LlamaIndex support regarding Azure OpenAI models (`from llama_index.llms.azure_openai import AzureOpenAI`). Getting the error `AttributeError: 'AzureOpenAI' object has no attribute 'set_run_config'`. My guess at a solution is to use `LlamaIndexLLMWrapper`, but unlike Langchain this wrapper is not integrated in `evaluation.py`.

**Ragas version:** 0.1.16
**Python version:** 3.10.14

**Code to Reproduce**

**Error trace**

**Expected behavior**
Expecting the code to run like in issue #1277.

**Additional context**
I see that in `evaluation.py` there is integrated support for Langchain. This kind of support would be nice for LlamaIndex as well.
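For illustration, the kind of auto-wrapping being requested could look roughly like the sketch below. The helper name `coerce_llm` and the stub wrapper are hypothetical, not the actual `evaluation.py` code:

```python
class LlamaIndexLLMWrapper:
    """Stub standing in for ragas.llms.base.LlamaIndexLLMWrapper."""
    def __init__(self, llm):
        self.llm = llm

    def set_run_config(self, run_config):
        self.run_config = run_config

def coerce_llm(llm):
    # Hypothetical helper: evaluate() could call this so any LlamaIndex
    # LLM is wrapped automatically, mirroring the existing Langchain handling.
    if hasattr(llm, "set_run_config"):
        return llm  # already Ragas-compatible
    if type(llm).__module__.startswith("llama_index"):
        return LlamaIndexLLMWrapper(llm)
    raise TypeError(f"unsupported llm type: {type(llm)!r}")
```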