langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
89.3k stars 14.08k forks

Keep getting AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'} #17373

Open lenka1 opened 5 months ago

lenka1 commented 5 months ago

Checked other resources

Example Code

```python
import os

from langchain.schema import SystemMessage, HumanMessage
from langchain_openai import AzureChatOpenAI
from langchain.callbacks import get_openai_callback

azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")

# Create an instance of the chat LLM
llm = AzureChatOpenAI(
    azure_endpoint=azure_endpoint,
    api_key=AZURE_OPENAI_API_KEY,
    api_version="2023-05-15",
    azure_deployment="gpt-3.5-turbo",
    model="gpt-3.5-turbo",
)

messages = [
    SystemMessage(
        content=(
            "You are ExpertGPT, an AGI system capable of "
            "anything except answering questions about cheese. "
            "It turns out that AGI does not fathom cheese as a "
            "concept, the reason for this is a mystery."
        )
    ),
    HumanMessage(content="Tell me about parmigiano, the Italian cheese!"),
]

with get_openai_callback() as cb:
    res = llm.invoke(messages)

    # print the response
    print(res.content)

    # print the total tokens used
    print(cb.total_tokens)
```
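A frequent cause of this 401 is an unset environment variable silently passed through as `None`. Before constructing the client, it can help to fail fast; a minimal stdlib-only sketch (the helper name `require_env` is mine, not from the issue):

```python
import os


def require_env(*names: str) -> dict:
    """Return the requested environment variables, raising early if any is unset or empty."""
    values = {name: os.getenv(name) for name in names}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise EnvironmentError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return values


# Fail fast before building AzureChatOpenAI, e.g.:
# env = require_env("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY")
```

If either variable is missing, this raises before any HTTP request is made, which is easier to debug than a 401 from the service.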

Error Message and Stack Trace (if applicable)

```
HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "HTTP/1.1 401 Unauthorized"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
DEBUG:openai._base_client:HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "401 Unauthorized"
DEBUG:openai._base_client:Encountered httpx.HTTPStatusError
Traceback (most recent call last):
  File "/home/mlakka/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
    response.raise_for_status()
  File "/home/mlakka/.local/lib/python3.10/site-packages/httpx/_models.py", line 759, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
DEBUG:openai._base_client:Not retrying
DEBUG:openai._base_client:Re-raising status error
```

Error is coming from here

Description

My key works for other calls, but with LangChain it is not working and gives the error above. Please help. By the way, I am using Azure OpenAI.

```
---------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
Cell In[6], line 32
     30 with get_openai_callback() as cb:
---> 32     res = llm.invoke(messages)
     34     # print the response
     35     print(res.content)

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    163 config = ensure_config(config)
    164 return cast(
    165     ChatGeneration,
--> 166     self.generate_prompt(
    167         [self._convert_input(input)],
    168         stop=stop,
    169         callbacks=config.get("callbacks"),
    ...
    174     ).generations[0][0],
    175 ).message

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    406 if run_managers:
    407     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    396 try:
    397     results.append(
--> 398         self._generate_with_cache(
    399             m,
    400             stop=stop,
    401             run_manager=run_managers[i] if run_managers else None,
    ...

File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    576 if new_arg_supported:
--> 577     return self._generate(
    578         messages, stop=stop, run_manager=run_manager, **kwargs
    579     )

File ~/.local/lib/python3.10/site-packages/langchain_openai/chat_models/base.py:451, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
    445 message_dicts, params = self._create_message_dicts(messages, stop)
--> 451 response = self.client.create(messages=message_dicts, **params)
    452 return self._create_chat_result(response)

File ~/.local/lib/python3.10/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    273     msg = f"Missing required argument: {quote(missing[0])}"
    274     raise TypeError(msg)
--> 275 return func(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py:663, in Completions.create(self, messages, model, ...)
--> 663 return self._post(
    664     "/chat/completions",
    665     body=maybe_transform({...}, completion_create_params.CompletionCreateParams),
    690     options=make_request_options(...),
    693     cast_to=ChatCompletion,
    694     stream=stream or False,
    695     stream_cls=Stream[ChatCompletionChunk],
    696 )

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
--> 889 return self._request(
    890     cast_to=cast_to,
    891     options=options,
    ...

File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None

AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com/), or have expired.'}
```

System Info

```
openai==1.12.0
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.5
```

dosubot[bot] commented 5 months ago

🤖

Hey @lenka1, great to see you back! Hope everything's been going well since our last chat.

Based on the error message you provided, it seems like there might be an issue with the authentication process when using the LangChain library with Azure OpenAI. The error code 401 indicates an authentication error, which could be due to a number of reasons. Here are a few possibilities:

  1. Incorrect API Key: The openai_api_key might be incorrect. This key is fetched from the environment variable AZURE_OPENAI_API_KEY or OPENAI_API_KEY. Please ensure that the correct key is set in the environment variable.

  2. Incorrect Azure Active Directory Token: The azure_ad_token might be incorrect. This token is fetched from the environment variable AZURE_OPENAI_AD_TOKEN. Please ensure that the correct token is set in the environment variable.

  3. Incorrect Azure Endpoint: The azure_endpoint might be incorrect. This endpoint is fetched from the environment variable AZURE_OPENAI_ENDPOINT. Please ensure that the correct endpoint is set in the environment variable.

  4. Incorrect Deployment: If you are using a specific deployment, ensure that the deployment parameter is correctly set.

  5. Incorrect Base URL: If you are using a custom base URL, ensure that the openai_api_base parameter is correctly set. As per the code, if deployment is specified, openai_api_base should not be set; use deployment together with azure_endpoint instead.

Please verify these parameters and try again. If the issue persists, please provide more details for further assistance.
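When comparing the key this process sees against the one that works in other clients, it is safer to log a masked form than the raw secret. A small stdlib-only helper (the name `mask_secret` is illustrative, not part of LangChain):

```python
import os


def mask_secret(value: str, keep: int = 4) -> str:
    """Mask all but the last `keep` characters of a secret for safe logging."""
    if not value:
        return "<unset>"
    return "*" * max(len(value) - keep, 0) + value[-keep:]


# Example: confirm the process sees the same key your working client uses.
# print("AZURE_OPENAI_API_KEY:", mask_secret(os.getenv("AZURE_OPENAI_API_KEY", "")))
```

If the printed suffix differs from the key that works elsewhere (or shows `<unset>`), the 401 is explained before looking anywhere else.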

The LangChain library handles authentication with Azure OpenAI through the use of API keys and Azure Active Directory tokens. These can be provided either directly as parameters to the AzureOpenAIEmbeddings class or through environment variables. The API key can be set through the openai_api_key parameter or the AZURE_OPENAI_API_KEY environment variable. The Azure Active Directory token can be set through the azure_ad_token parameter or the AZURE_OPENAI_AD_TOKEN environment variable.

In the validate_environment method, the library first checks if the openai_api_key and azure_ad_token values are provided. If not, it tries to get these values from the environment variables AZURE_OPENAI_API_KEY and AZURE_OPENAI_AD_TOKEN respectively. If the environment variables are also not set, the values remain None.

These values are then used to create an instance of the openai.AzureOpenAI class, which is used to interact with the Azure OpenAI API.
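The fallback described above can be sketched roughly as follows (this is a simplified illustration of the described behavior, not the library's actual code):

```python
import os
from typing import Optional


def resolve_credential(explicit: Optional[str], env_var: str) -> Optional[str]:
    """Prefer an explicitly passed value; otherwise fall back to the
    environment variable; otherwise return None."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var)


# e.g.  api_key  = resolve_credential(passed_api_key, "AZURE_OPENAI_API_KEY")
#       ad_token = resolve_credential(passed_ad_token, "AZURE_OPENAI_AD_TOKEN")
```

Note the consequence: if neither the parameter nor the environment variable is set, the client is still constructed with `None` credentials, and the failure only surfaces later as the 401 seen in this issue.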

You can find more details in the LangChain repository.

If you have any other questions or need further clarification, feel free to ask.

sagardhakate commented 4 months ago

I am also facing the same issue.

After a certain number of prompts, the chatbot starts returning the same error. As a temporary workaround, we restart the application.

Could you please provide a programmatic solution for this?

HimanshuParmarMoodys commented 3 months ago

Hi @sagardhakate and @lenka1, all you need to do is make sure you set the environment variables with the following names; it should work fine:

```python
os.environ["AZURE_OPENAI_API_KEY"] = "your azure api key"
os.environ["AZURE_OPENAI_ENDPOINT"] = "your endpoint url"
```

younes-io commented 3 months ago

@HimanshuParmarMoodys: your solution doesn't support the other authentication mechanism on Azure: Azure AD.
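For the Azure AD path, `AzureChatOpenAI` accepts an `azure_ad_token_provider` instead of an API key. A sketch of that wiring, assuming `azure-identity` and `langchain-openai` are installed (imports are deferred so the helper can be defined without them; the function name is mine):

```python
def build_azure_ad_chat(endpoint: str, deployment: str, api_version: str = "2023-05-15"):
    """Construct an AzureChatOpenAI client authenticated via Azure AD
    rather than an API key. Requires azure-identity and langchain-openai."""
    # Deferred imports: only needed when the helper is actually called.
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from langchain_openai import AzureChatOpenAI

    # Token scope for Azure OpenAI -- the same audience the 401 message
    # in this issue complains about.
    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    )
    return AzureChatOpenAI(
        azure_endpoint=endpoint,
        azure_ad_token_provider=token_provider,
        azure_deployment=deployment,
        api_version=api_version,
    )
```

With this approach no `AZURE_OPENAI_API_KEY` is needed; the credential chain (CLI login, managed identity, etc.) supplies the token, and the audience must match or the service returns the same 401.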

KeyanatGiggso commented 4 weeks ago

Check your API keys and endpoint. After researching everything, I found my token was not correct. The version is mostly not the problem; the credentials are. So please check yours.