langchain-ai / langchain

๐Ÿฆœ๐Ÿ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.57k stars 15.3k forks source link

agent executor not using custom openai_api_base #14470

Closed ethankershner closed 7 months ago

ethankershner commented 11 months ago

System Info

OS: Using docker image amd64/python:3.10-slim Python Version: 3.10.13 Langchain Version: 0.0.336 OpenAI Version: 0.27.7 Tenacity Version: 4.65.0

Who can help?

@agola11 @hwchase17

Information

Related Components

Reproduction

When I try to use an llm with a custom openai_api_base argument within an agent, the agent appears to be attempting to access the OpenAI API endpoint instead of the custom one I have specified.

Running:

    llm = ChatOpenAI(default_headers={"api-key":"**REDACTED**",
                     openai_api_base="**REDACTED**",
                     openai_api_key="none").bind(stop=["\nObservation"])

    tools =  []

    tools.append(Tool.from_function(func=self.get_scores,
                                 name="get_scores",
                                 description="function to get scores"))

    prompt = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools:
                                                              {tools}

                                                            Use the following format:

                                                            Question: the input question you must answer
                                                            Thought: you should always think about what to do
                                                            Action: the action to take, should be one of [{tool_names}]
                                                            Action Input: the input to the action
                                                            Observation: the result of the action
                                                            ... (this Thought/Action/Action Input/Observation can repeat N times)
                                                            Thought: I now know the final answer
                                                            Final Answer: the final answer to the original input question

                                                            Begin!

                                                            Question: {input}
                                                            Thought:{agent_scratchpad}""")

    prompt = prompt.partial(tools=render_text_description(tools),
                            tool_names=", ".join([t.name for t in tools]),
                            )

    agent = (
    {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | llm
    | ReActSingleInputOutputParser()
    )

    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    output = agent_executor.invoke({"input":"foo"})

yields:

File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 87, in invoke return self( File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__ raise e File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__ self._call(inputs, run_manager=run_manager) File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _call next_step_output = self._take_next_step( File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step output = self.agent.plan( File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 385, in plan output = self.runnable.invoke(inputs, config={"callbacks": callbacks}) File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 1427, in invoke input = step.invoke( File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 2787, in invoke return self.bound.invoke( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 142, in invoke self.generate_prompt( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 349, in generate raise e File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 339, in generate self._generate_with_cache( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache return self._generate( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 422, in _generate response = self.completion_with_retry( File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 352, in completion_with_retry return _completion_with_retry(**kwargs) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter raise retry_exc.reraise() File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise raise self.last_attempt.result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 350, in _completion_with_retry return self.client.create(**kwargs) File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request resp, got_stream = self._interpret_response(result, stream) File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response self._interpret_response_line( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line raise self.handle_error_response( File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 337, in handle_error_response raise error.APIError( openai.error.APIError: Invalid response object from API: '{\n "detail": "No authorization token provided",\n "status": 401,\n "title": "Unauthorized",\n "type": "about:blank"\n}\n' (HTTP response code was 401)

When I change the openai_api_base to something nonsensical, the same error is returned, making me think that it is using OpenAI's API base and not the custom one specified.

Expected behavior

I would expect the agent to work as shown here: https://python.langchain.com/docs/modules/agents/agent_types/react

dosubot[bot] commented 11 months ago

๐Ÿค–

Hello @ethankershner! :wave: I'm Dosu, a friendly bot here to assist while we're waiting for human maintainers. I'm designed to help troubleshoot bugs, answer questions, and guide you to be an even better contributor. Looking forward to working together on your issue!

Based on the information you've provided, it seems like the openai_api_base parameter is not being correctly set when initializing the ChatOpenAI object. This could be due to a couple of reasons:

  1. The openai_api_base parameter is not being correctly passed to the ChatOpenAI object. In your code, it seems like there might be a missing comma after the "api-key":"**REDACTED**" line. This could be causing a syntax error which prevents the openai_api_base parameter from being correctly set. Here's the corrected code:
llm = ChatOpenAI(default_headers={"api-key":"**REDACTED**"},
                 openai_api_base="**REDACTED**",
                 openai_api_key="none").bind(stop=["\nObservation"])
  1. The openai_api_base parameter is being overridden somewhere else in your code. The LangChain agent first checks the values passed to the LangChain configuration (values dictionary), and if not found, it then checks the environment variables. If the openai_api_base parameter is being set to a different value somewhere else in your code or environment, it could be overriding the value you're trying to set.

If neither of these solutions work, it would be helpful to see more of your code to better understand what might be going wrong. Specifically, it would be useful to see how you're setting up your environment and initializing the LangChain agent.

For more information on how the openai_api_base parameter is used in LangChain, you can refer to the LangChain source code.

I hope this helps! Let me know if you have any other questions.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a ๐Ÿ‘ if this is helpful and ๐Ÿ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ethankershner commented 11 months ago

@dosu-bot

I updated the issue with more information. Can you take another look?

dosubot[bot] commented 11 months ago

๐Ÿค–

I'm unable to directly access the LangChain repository to view the updated issue. However, if you can provide me with the additional information here, I'd be more than happy to assist you further.


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a ๐Ÿ‘ if this is helpful and ๐Ÿ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ethankershner commented 11 months ago

@dosu-bot

Sure, I specified additional versions of Python packages I am using that could help contribute to the error:

OpenAI Version: 0.27.7 Tenacity Version: 4.65.0

dosubot[bot] commented 11 months ago

๐Ÿค–

Based on the information you've provided and the context from the LangChain repository, it seems like the issue you're experiencing might be due to the fact that the openai_api_base value is not being used in the completion_with_retry and acompletion_with_retry functions. These functions are responsible for making the actual API calls to OpenAI.

In the provided context, the openai_api_base value is only used in the validate_environment function to set up the client parameters. However, these parameters are not used in the completion_with_retry and acompletion_with_retry functions.

To resolve this issue, you need to modify the completion_with_retry and acompletion_with_retry functions to use the openai_api_base value when making the API calls. Here's how you can do it:

def completion_with_retry(
    llm: Union[BaseOpenAI, OpenAIChat],
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Any:
    """Use tenacity to retry the completion call."""
    if is_openai_v1():
        return llm.client.create(**kwargs)

    retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)

    @retry_decorator
    def _completion_with_retry(**kwargs: Any) -> Any:
        return llm.client.create(base_url=llm.openai_api_base, **kwargs)

    return _completion_with_retry(**kwargs)

async def acompletion_with_retry(
    llm: Union[BaseOpenAI, OpenAIChat],
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Any:
    """Use tenacity to retry the async completion call."""
    if is_openai_v1():
        return await llm.async_client.create(**kwargs)

    retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)

    @retry_decorator
    async def _completion_with_retry(**kwargs: Any) -> Any:
        # Use OpenAI's async api https://github.com/openai/openai-python#async-api
        return await llm.client.acreate(base_url=llm.openai_api_base, **kwargs)

    return await _completion_with_retry(**kwargs)

Please note that this is a hypothetical solution and the actual implementation might differ based on the complete codebase and the OpenAI Python client version you're using. You can find more details about this in the LangChain repository.

As for the openai_api_base parameter, it is used in the ChatOpenAI class in LangChain to specify the base URL path for API requests. This parameter is useful when you are using a proxy or a service emulator for your API requests. If you are not using a proxy or service emulator, you can leave this parameter blank. You can find more details about this in the LangChain repository.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a ๐Ÿ‘ if this is helpful and ๐Ÿ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ethankershner commented 11 months ago

@dosu-bot

Why is it necessary to pass this in as a separate parameter? Wouldn't this be taken care of with the **kwargs parameter that already exists?

dosubot[bot] commented 11 months ago

๐Ÿค–

Based on the information you've provided and the similar issues I found in the LangChain repository, it seems like the openai_api_base parameter is not being used as expected in the ChatOpenAI class.

The openai_api_base parameter is stored as an attribute of the ChatOpenAI object and is included in the lc_attributes property, but it is not used anywhere else in the class. This means that even though you're passing a custom OpenAI API base, it's not being used when making API requests.

To use a custom OpenAI API base, you would need to modify the ChatOpenAI class to use the openai_api_base parameter when creating the OpenAI API client. This could be done in the __init__ method of the class or in a separate method that is called when the client is created. Here is an example of how you could modify the ChatOpenAI class to use the openai_api_base parameter:

import openai

class ChatOpenAI(BaseChatModel):
    # ...

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.openai_api_base is not None:
            openai.api_base = self.openai_api_base

This code will set the base URL for the OpenAI API to the value of the openai_api_base parameter when a ChatOpenAI object is created. Note that this will affect all subsequent API calls made using the openai package, not just those made using the ChatOpenAI object. You can find more details about this in the LangChain codebase.

Regarding your question about the kwargs parameter, it's a common Python convention to use kwargs as a catch-all for additional keyword arguments that are not explicitly listed in the function or method signature. In the case of the ChatOpenAI class, the **kwargs parameter is used to capture any additional parameters that are not listed in the __init__ method signature. However, it's not used to set the OpenAI API base.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a ๐Ÿ‘ if this is helpful and ๐Ÿ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ethankershner commented 11 months ago

@dosu-bot

I don't think this is it. There is logic built in the ChatOpenAI class to handle custom base URLs, plus as I mentioned I am able to successfully use this object when I am using a chain outside of an agent. Do you see anything in langchain/agents/agent.py that could lead to overriding of user-supplier parameters for a runnable like ChatOpenAI added to an agent's chain?

dosubot[bot] commented 11 months ago

๐Ÿค–

Based on the information you've provided and the context from the LangChain repository, it seems like the issue might be related to changes made in the ChatOpenAI class between LangChain versions 0.0.200 and 0.0.267. The methods in the ChatOpenAI class have been made asynchronous in the newer versions. This means that the methods _generate and _stream have been changed to _agenerate and _astream respectively. Also, the completion_with_retry method call has been changed to acompletion_with_retry and is now awaited. This indicates that this method is now asynchronous. You might need to adjust your code to handle these asynchronous methods properly.

As for the "Invalid URL" error, it could be due to a change in the way the AzureMLEndpointClient is initialized or how the endpoint URL is being passed to it. You should ensure that the endpoint URL is correctly set in either the AzureMLOnlineEndpoint constructor or the AZUREML_ENDPOINT_URL environment variable. The URL should be a valid Azure ML endpoint URL. You can refer to the issue Getting invalid URL post after updating langchain from 0.0.200 to 0.0.267 for more details.

Regarding the openai_api_base parameter, it is used to set the base URL path for API requests. This is particularly useful when you are using a proxy or a service emulator. If you leave it blank, it means you are not using a proxy or service emulator. The value for openai_api_base can be provided directly when initializing the ChatOpenAI class, or it can be automatically inferred from the environment variable OPENAI_API_BASE if not provided. You can refer to the libs/langchain/langchain/llms/openai.py file for more details.

If you're still facing the issue, could you please provide more details about the custom openai_api_base you're using and how you're setting it? Also, could you please share the full stack trace of the error you're getting? This will help in understanding the issue better.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a ๐Ÿ‘ if this is helpful and ๐Ÿ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ethankershner commented 11 months ago

Update: This issue did not occur when using OpenAI version 1.3.6. Looking into potential steps to preserve backwards-compatibility.