Open llvll0hsen opened 4 days ago
The error you're encountering suggests that the API key being used by the `generate_with_langchain_docs` method is incorrect or not being passed correctly. Here are a few things you can check and try:

1. **API Key Configuration**: Ensure that the API key is correctly set in your environment or configuration. The `LangchainLLMWrapper` and `AzureChatOpenAI` should be using the same API key. Double-check that it is correctly configured in your environment variables or wherever it is being set.
2. **Environment Variables**: If you're using environment variables to store your API key, make sure they are correctly set and accessible in the context where `generate_with_langchain_docs` is being called.
3. **Wrapper Initialization**: When you wrap `AzureChatOpenAI` with `LangchainLLMWrapper`, ensure that the wrapper is correctly passing the API key to the underlying model. There might be a discrepancy in how the wrapper handles the API key compared to direct invocation.
4. **Version Compatibility**: Although there are no known issues between `ragas==0.2.1` and `langchain_openai==0.2.2`, ensure that all dependencies are up to date and compatible. Sometimes updating to the latest versions resolves unexpected issues.
5. **Debugging**: Add debugging logs to check whether the API key is being altered or not passed correctly when using `generate_with_langchain_docs`. This can help identify where the issue is occurring.

If these steps do not resolve the issue, consider isolating the problem by testing the API key with a minimal setup to confirm it works as expected in different parts of your code (a sketch of such a check follows after this comment).
To continue talking to Dosu, mention @dosu.
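As a concrete version of the "minimal setup" suggestion above, here is a sketch that calls the Azure client directly and then through ragas' `LangchainLLMWrapper`, so a 401 can be pinned to one layer or the other. The deployment name, API version, and environment-variable names are placeholders, and `generate_text` accepting a LangChain `PromptValue` reflects recent ragas releases; adjust to your own setup.

```python
# Minimal isolation check (sketch): verify the Azure credentials work both directly
# and through ragas' LangchainLLMWrapper. Deployment name and API version below are
# placeholders, not values from the original report.
import os

from langchain_core.prompt_values import StringPromptValue
from langchain_openai import AzureChatOpenAI
from ragas.llms import LangchainLLMWrapper

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_deployment="gpt-4o-mini",   # placeholder deployment name
    api_version="2024-02-01",         # placeholder API version
)

# 1) Direct call: exercises only the Azure credentials.
print(azure_llm.invoke("ping").content)

# 2) Wrapped call: the same credentials routed through ragas' wrapper.
wrapped_llm = LangchainLLMWrapper(azure_llm)
result = wrapped_llm.generate_text(StringPromptValue(text="ping"))
print(result.generations[0][0].text)
```

If the direct call succeeds but the wrapped call fails with a 401 that names an OpenAI key, the problem is not your Azure configuration but which client is being constructed downstream.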
I've already checked the API key by calling the `AzureChatOpenAI.invoke` method, and it works just fine.
@llvll0hsen I have exactly the same issue! It seems that the problem lies in the `llm_factory`: it falls back on `ChatOpenAI` instead of `AzureChatOpenAI`. The docs at https://docs.ragas.io/en/stable/howtos/customizations/customize_models/#customize-models give an overview (that pattern is sketched below), but it still fails. @jjmachan can we expect a hotfix?
Same here. The LLM on Azure works by itself, but not when it is embedded in `TestsetGenerator`: instead of using the Azure API key, it uses the OpenAI API key.
Hey, let me look into this ASAP and I will get back.
Thanks! BTW, it is the same for `embedding_factory`. It would be nice to pass the LLM and embedding model through the `TestsetGenerator` parameters and actually use them in the default transforms.
I was having the same error. It seems like `llm_factory` defaults to OpenAI, and there is no support in that function for Azure OpenAI. I altered the code locally to see if it would accept an `AzureOpenAI` client, but no luck. In the previous version (0.1.21) you used to be able to pass your LLM and embedding client directly to `TestsetGenerator`, but that seems to have changed in the latest version.
Are there any updates on this?
I figured out the problem. I implemented a temporary fix locally just to have the ability to produce a test set, but it is not a long-term solution. I am willing to spend some time working on a fix if nobody else is.
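This is not the commenter's actual patch, but one shape such a stopgap can take: build the default transforms yourself with the Azure-backed models and hand them to the generator, so the `llm_factory`/`embedding_factory` defaults are never hit. It assumes your installed version exposes `default_transforms` with `llm`/`embedding_model` parameters and a `transforms` argument on `generate_with_langchain_docs`; both are assumptions to verify against your ragas version. It reuses `generator_llm`, `generator_embeddings`, `generator`, and `docs` from the earlier sketch.

```python
# Possible stopgap (sketch): construct the transforms explicitly so the OpenAI-backed
# factory defaults are never used. Parameter names are assumptions to verify against
# your installed ragas version; objects come from the earlier sketch.
from ragas.testset.transforms import default_transforms

transforms = default_transforms(
    documents=docs,
    llm=generator_llm,                    # LangchainLLMWrapper around AzureChatOpenAI
    embedding_model=generator_embeddings, # LangchainEmbeddingsWrapper around AzureOpenAIEmbeddings
)

dataset = generator.generate_with_langchain_docs(
    docs,
    testset_size=10,
    transforms=transforms,
)
```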
I am trying to use Azure OpenAI to create a test dataset, but for some reason `generate_with_langchain_docs` gives the following error:

`unable to apply transformation: Error code: 401 - {'error': {'message': 'Incorrect API key provided: 0ba2ec70********************407d. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}`

When I test my API key using `azure_llm.invoke("tell me a jok about weather")`, it works. Here is my code:

`ragas==0.2.1` and `langchain_openai==0.2.2`
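The masked key in that 401 is an OpenAI-style key being rejected by the OpenAI endpoint, which matches the diagnosis earlier in the thread that the factory builds a `ChatOpenAI` (which reads `OPENAI_API_KEY`) rather than an `AzureChatOpenAI` (which reads `AZURE_OPENAI_API_KEY` / `AZURE_OPENAI_ENDPOINT`). A quick, low-risk way to see which credentials the process can actually see:

```python
# Quick environment check (sketch): which credentials are visible to the process?
# Most likely the key shown in the 401 is whatever OPENAI_API_KEY holds, since the
# fallback client is a plain ChatOpenAI rather than AzureChatOpenAI.
import os

for var in ("OPENAI_API_KEY", "AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"):
    value = os.environ.get(var)
    print(f"{var}: {'set (' + value[:4] + '...)' if value else 'not set'}")
```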