explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

Error while using azure chat open ai #1251

Open cjtejasai opened 2 months ago

cjtejasai commented 2 months ago

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug When we use the AzureChatOpenAI method we run into issues. We have an additional wrapper on top of the AzureOpenAI function call, and because of that wrapper, the call to self.llm.set_run_config(run_config) inside Ragas expects the wrapped object to provide a set_run_config method; no matter what I change on the wrapper side, I cannot get it to run.

Ragas version: 0.1.6, Python version: 3.13

Code to Reproduce Share code to reproduce the issue

Error trace

Expected behavior A clear and concise description of what you expected to happen.

Additional context Add any other context about the problem here.
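As context for the error described above: Ragas calls set_run_config on whatever LLM object it is handed, so a custom wrapper must expose that method itself, typically by forwarding it to the wrapped model. The following is a minimal, self-contained sketch of that duck-typing contract. The classes here (RunConfig, InnerModel, the wrappers, and the caller) are simplified stand-ins for illustration, not the real Ragas or LangChain implementations:

```python
from dataclasses import dataclass


@dataclass
class RunConfig:
    """Simplified stand-in for ragas.run_config.RunConfig."""
    timeout: int = 60


class InnerModel:
    """Stand-in for the underlying LLM client."""
    def __init__(self):
        self.run_config = None

    def set_run_config(self, run_config: RunConfig):
        self.run_config = run_config


class BareWrapper:
    """A wrapper that does not expose set_run_config (the failing case)."""
    def __init__(self, llm):
        self.llm = llm


class ForwardingWrapper:
    """A wrapper that stores the config and forwards it to the inner model."""
    def __init__(self, llm):
        self.llm = llm
        self.run_config = None

    def set_run_config(self, run_config: RunConfig):
        self.run_config = run_config
        if hasattr(self.llm, "set_run_config"):
            self.llm.set_run_config(run_config)


def ragas_style_caller(llm, run_config: RunConfig):
    # Mimics what Ragas does internally: it assumes the object has the method.
    llm.set_run_config(run_config)


config = RunConfig(timeout=30)

good = ForwardingWrapper(InnerModel())
ragas_style_caller(good, config)  # works: the method exists and forwards

bad = BareWrapper(InnerModel())
try:
    ragas_style_caller(bad, config)
except AttributeError as exc:
    print(exc)  # no set_run_config on the bare wrapper
```

In short, the fix on the wrapper side is to define set_run_config and delegate, so the wrapper satisfies the interface Ragas expects.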

dosubot[bot] commented 2 months ago

Hey @cjtejasai! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together!

I found a similar closed issue that might help you resolve your problem:

  1. AttributeError: 'AzureChatOpenAI' object has no attribute 'set_run_config' - The solution involved removing code that incorrectly patched models in all the metrics.

For your specific issue with the AzureChatOpenAI method in Ragas version 0.1.6, you need to ensure that the set_run_config method correctly sets the run_config on the wrapped instance. Here is the relevant snippet from Ragas's embeddings wrapper:

def set_run_config(self, run_config: RunConfig):
    self.run_config = run_config

    # run configurations specially for OpenAI
    if isinstance(self.embeddings, OpenAIEmbeddings):
        try:
            from openai import RateLimitError
        except ImportError:
            raise ImportError(
                "openai.error.RateLimitError not found. Please install openai package as `pip install openai`"
            )
        self.embeddings.request_timeout = run_config.timeout
        self.run_config.exception_types = RateLimitError

Ensure that your AzureChatOpenAI instance is wrapped with Ragas's LLM wrapper (LangchainLLMWrapper, not the embeddings wrapper, since AzureChatOpenAI is a chat model) and that the run_config is properly set. Here is an example:

from langchain_openai.chat_models import AzureChatOpenAI
from ragas.llms import LangchainLLMWrapper
from ragas.run_config import RunConfig

# Create an instance of AzureChatOpenAI
azure_chat_openai = AzureChatOpenAI(
    api_key="your_api_key",
    azure_endpoint="your_endpoint",
    azure_deployment="your_deployment",
    api_version="your_api_version",
)

# Create a RunConfig instance
run_config = RunConfig(timeout=30)  # set your desired timeout

# Wrap the chat model so Ragas can manage it
llm_wrapper = LangchainLLMWrapper(azure_chat_openai, run_config=run_config)

# Set the run configuration explicitly
llm_wrapper.set_run_config(run_config)

This should ensure that the run_config is correctly applied to the AzureChatOpenAI instance, resolving the issue [1][2].

To continue talking to Dosu, mention @dosu.

jjmachan commented 2 months ago

@cjtejasai did it help?