Closed. namgizlat closed this issue 1 year ago.
Answer generated by a 🤖
I found related issues that might be helpful. I did my best to summarize the solutions, but I recommend looking at the related issues yourself.
The suggested solution is:
It seems that LangChain's AzureOpenAI class does not support the deployment_id parameter out of the box. You can create a custom class that inherits from AzureOpenAI and overrides the necessary methods to support the deployment_id parameter. Here's an example of how you can do that:
from typing import Any

from langchain.llms import AzureOpenAI
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://eastus.api.cognitive.microsoft.com/"
os.environ["OPENAI_API_KEY"] = "MYAPIKEYHERE"

class CustomAzureOpenAI(AzureOpenAI):
    # Declare the field so the underlying pydantic model accepts it
    deployment_id: str = ""

    def __call__(self, *args: Any, **kwargs: Any) -> str:
        # Inject the deployment_id into every completion request
        kwargs["deployment_id"] = self.deployment_id
        return super().__call__(*args, **kwargs)

llm = CustomAzureOpenAI(
    model_name="gpt-3.5",
    deployment_id="gpt35",
)

# AzureOpenAI is a completion (not chat) model, so it takes a plain string prompt
response = llm("Tell me a joke")
print(response)
This should resolve the issue and allow you to use the deployment_id with LangChain's AzureOpenAI.
No discussion on issue
The suggested solution is:
Check the openai_api_base URL and ensure that it is correctly formatted and points to the correct Azure resource. Verify that deployment_name matches exactly the one set up in Azure. If the deployment was just created, it might take a few minutes for it to be ready, so waiting a few minutes before trying again is advised.
Here's the corrected code snippet:
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

BASE_URL = "https://<your-resource-name>.openai.azure.com"
API_KEY = "<your-api-key>"
DEPLOYMENT_NAME = "<your-deployment-name>"

model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
)

print(model(
    [
        HumanMessage(
            content="Translate this sentence from English to French. I love programming."
        )
    ]
))
Replace <your-resource-name>, <your-api-key>, and <your-deployment-name> with the actual Azure resource name, API key, and deployment name, respectively.
Alternatively, these parameters can be set as environment variables: OPENAI_API_TYPE, OPENAI_API_KEY, OPENAI_API_BASE, OPENAI_API_VERSION, and OPENAI_PROXY can all be set instead of being passed to the constructor. If behind a corporate proxy, set OPENAI_PROXY to the address of the proxy, as in the sketch below.
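A minimal sketch of the environment-variable approach; all values are placeholders, and the OPENAI_PROXY line is only needed behind a proxy:

import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_BASE"] = "https://<your-resource-name>.openai.azure.com"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
# os.environ["OPENAI_PROXY"] = "http://<proxy-address>:<port>"  # only if behind a proxy

from langchain.chat_models import AzureChatOpenAI

# deployment_name still identifies which Azure deployment to call
model = AzureChatOpenAI(deployment_name="<your-deployment-name>")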
If this solution doesn't work or contradicts your experience, upgrading to the latest version of LangChain is recommended. If you believe this is a bug that could impact other users, a pull request with the necessary changes can be made.
The suggested solution is: Try upgrading LangChain to the latest version, which is 0.0.218.
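For reference, upgrading is typically done with pip install --upgrade langchain, or pip install langchain==0.0.218 to pin the version mentioned above.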
The suggested solution is:
OpenAI and AzureOpenAI are both based on the official openai SDK, so they pick up the same environment configuration. When you set

OPENAI_API_TYPE = azure
OPENAI_API_VERSION = 2023-03-15-preview
OPENAI_API_BASE = https://****.openai.azure.com/
OPENAI_API_KEY = ***

the openai SDK treats every request as an Azure request, which is why plain OpenAI calls then fail with "Must provide an 'engine' or 'deployment_id' parameter". If you want to use both, pass the configuration explicitly to each constructor instead of relying on the environment:

OpenAI(openai_api_key="sk...", model_name="...")
AzureOpenAI(openai_api_key="...", model_name="...", openai_api_base="https://<your-resource-name>.openai.azure.com/")
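Expanding on that, a minimal sketch of using both clients in one process; all key, model, and resource names are placeholders, and it assumes the langchain wrappers of this era accept per-instance configuration:

from langchain.llms import AzureOpenAI, OpenAI

# Regular OpenAI: only needs its own key and model name
regular_llm = OpenAI(
    openai_api_key="sk-...",
    model_name="text-davinci-003",
)

# Azure OpenAI: pass the full Azure configuration explicitly so the
# shared openai SDK state is not switched to Azure for every request
azure_llm = AzureOpenAI(
    openai_api_key="<azure-api-key>",
    deployment_name="<your-deployment-name>",
    openai_api_base="https://<your-resource-name>.openai.azure.com/",
    openai_api_version="2023-03-15-preview",
    openai_api_type="azure",
)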
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Hi, @namgizlat! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, you are experiencing an "InvalidRequestError: Resource not found" when running the pandas_dataframe_agent over AzureOpenAI. There haven't been any discussions on this issue, but there are related issues that might provide helpful solutions. One suggested solution is to create a custom class that inherits from AzureOpenAI and overrides the necessary methods to support the deployment_id parameter. Another suggested solution is to check the openai_api_base URL and ensure it is correctly formatted and points to the correct Azure resource.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain community!
Was this issue ever solved? I have the same error.
System Info
langchain version: 0.0.216
Who can help?
No response
Reproduction
from langchain.agents import create_csv_agent
from langchain.agents import create_pandas_dataframe_agent
from langchain.agents.agent_types import AgentType
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.llms import AzureOpenAI, OpenAI
import os
import pandas as pd

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://####.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "#####"

df = pd.read_csv("maccabi.csv")
agent = create_pandas_dataframe_agent(AzureOpenAI(temperature=0), df, verbose=True)
agent.run("how many rows are there?")
Getting the following error:
InvalidRequestError                       Traceback (most recent call last)
Cell In[4], line 2
      1 # agent.run("how many players's job is scorer and what is their name")
----> 2 agent.run("how many rows are there?")

File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:290, in Chain.run
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:166, in Chain.__call__
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:160, in Chain.__call__
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:987, in AgentExecutor._call
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:792, in AgentExecutor._take_next_step
File ~/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py:443, in Agent.plan
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:252, in LLMChain.predict
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:166, in Chain.__call__
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py:160, in Chain.__call__
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:92, in LLMChain._call
File ~/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py:102, in LLMChain.generate
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:141, in BaseLLM.generate_prompt
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:227, in BaseLLM.generate
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:178, in BaseLLM._generate_helper
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:165, in BaseLLM._generate_helper
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/openai.py:336, in BaseOpenAI._generate
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/openai.py:106, in completion_with_retry
File ~/Library/Python/3.9/lib/python/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f
File ~/Library/Python/3.9/lib/python/site-packages/tenacity/__init__.py:379, in Retrying.__call__
File ~/Library/Python/3.9/lib/python/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter
File /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py:438, in Future.result
File /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py:390, in Future.__get_result
File ~/Library/Python/3.9/lib/python/site-packages/tenacity/__init__.py:382, in Retrying.__call__
File ~/Library/Python/3.9/lib/python/site-packages/langchain/llms/openai.py:104, in completion_with_retry.<locals>._completion_with_retry
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_resources/completion.py:25, in Completion.create
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:230, in APIRequestor.request
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:624, in APIRequestor._interpret_response
File ~/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py:687, in APIRequestor._interpret_response_line

InvalidRequestError: Resource not found
Expected behavior
The agent should return the correct response.
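For what it's worth, a common cause of "Resource not found" in this setup is constructing AzureOpenAI without a deployment name, so the SDK cannot build the Azure URL. A minimal sketch of the commonly suggested fix, with "gpt35" standing in for your actual deployment name:

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import AzureOpenAI
import os
import pandas as pd

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://####.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "#####"

df = pd.read_csv("maccabi.csv")

# Passing deployment_name lets the openai SDK address the specific
# Azure deployment instead of the bare resource endpoint
llm = AzureOpenAI(deployment_name="gpt35", temperature=0)  # "gpt35" is a placeholder
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("how many rows are there?")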