langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

ConfigurableFields does not work for agent #23745

Open quadcube opened 2 weeks ago

quadcube commented 2 weeks ago

Checked other resources

Example Code

import json
from typing import Any, Dict

from fastapi import HTTPException, Request
from langchain.agents import AgentExecutor
import langchain.agents as lc_agents
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
from langserve import add_routes

def fetch_config_from_header(config: Dict[str, Any], req: Request) -> Dict[str, Any]:
    config = config.copy()
    configurable = config.get("configurable", {})

    if "x-model-name" in req.headers:
        configurable["model_name"] = req.headers["x-model-name"]
    else:
        raise HTTPException(401, "No model name provided")

    if "x-api-key" in req.headers:
        configurable["default_headers"] = {
            "Content-Type": "application/json",
            "api-key": req.headers["x-api-key"]
        }
    else:
        raise HTTPException(401, "No API key provided")

    if "x-model-kwargs" in req.headers:
        configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])
    else:
        raise HTTPException(401, "No model arguments provided")

    configurable["openai_api_base"] = f"https://someendpoint.com/{req.headers['x-model-name']}"
    config["configurable"] = configurable
    return config

chat_model = ChatOpenAI(
    model_name="some_model",
    model_kwargs={},
    default_headers={},
    openai_api_key="placeholder",
    openai_api_base="placeholder",
).configurable_fields(
    model_name=ConfigurableField(id="model_name"),
    model_kwargs=ConfigurableField(id="model_kwargs"),
    default_headers=ConfigurableField(id="default_headers"),
    openai_api_base=ConfigurableField(id="openai_api_base"),
)

agent = lc_agents.create_tool_calling_agent(chat_model, tools, prompt_template)
runnable = AgentExecutor(agent=agent, tools=tools)

add_routes(
    app,
    runnable.with_types(input_type=InputChat),
    path="/some_agent",
    per_req_config_modifier=fetch_config_from_header,
)
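For what it's worth, the merge logic in `fetch_config_from_header` can be exercised in isolation with a stubbed request. This is a sketch: `StubRequest` is a hypothetical stand-in for `fastapi.Request` that only exposes `.headers`, and `ValueError` replaces `HTTPException` so it runs without FastAPI installed.

```python
import json
from typing import Any, Dict


class StubRequest:
    """Minimal stand-in for fastapi.Request: only exposes .headers."""
    def __init__(self, headers: Dict[str, str]):
        self.headers = headers


def fetch_config_from_header(config: Dict[str, Any], req: StubRequest) -> Dict[str, Any]:
    # Same merge logic as the handler above, with ValueError in place
    # of HTTPException so the sketch has no web-framework dependency.
    config = dict(config)
    configurable = dict(config.get("configurable", {}))

    if "x-model-name" not in req.headers:
        raise ValueError("No model name provided")
    configurable["model_name"] = req.headers["x-model-name"]

    if "x-api-key" not in req.headers:
        raise ValueError("No API key provided")
    configurable["default_headers"] = {
        "Content-Type": "application/json",
        "api-key": req.headers["x-api-key"],
    }

    if "x-model-kwargs" not in req.headers:
        raise ValueError("No model arguments provided")
    configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])

    configurable["openai_api_base"] = (
        f"https://someendpoint.com/{req.headers['x-model-name']}"
    )
    config["configurable"] = configurable
    return config


req = StubRequest({
    "x-model-name": "some_model",
    "x-api-key": "some_api_key",
    "x-model-kwargs": '{"user": "some_user"}',
})
out = fetch_config_from_header({}, req)
```

Running this produces exactly the `configurable` dict shown further down, which suggests the per-request config is built correctly and the loss happens later, inside the executor.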

Error Message and Stack Trace (if applicable)

No response

Description

Ideally, when a field is marked configurable, it should be updated whenever new configurable values are supplied by per_req_config_modifier.

However, none of the configurable values, such as temperature, openai_api_base, default_headers, etc., is passed through to the final OpenAI client.
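My reading of the symptom (a sketch of the failure pattern, not the actual langchain internals) is that the model instance, with its constructor defaults, is captured once when the agent is built, and the executor then invokes it without threading the per-request config through. A minimal pure-Python illustration of that early-binding pattern, with hypothetical `Model`/`Executor` classes:

```python
from typing import Any, Dict


class Model:
    def __init__(self, model_name: str):
        self.model_name = model_name

    def invoke(self, config: Dict[str, Any]) -> str:
        # A config-aware runnable applies "configurable" overrides here.
        overrides = config.get("configurable", {})
        return overrides.get("model_name", self.model_name)


class Executor:
    def __init__(self, model: Model):
        # Early binding: the model (with its constructor defaults)
        # is captured once, at build time.
        self.model = model

    def invoke(self, config: Dict[str, Any]) -> str:
        # Bug pattern: the per-request config is dropped instead of
        # being forwarded to the inner model.
        return self.model.invoke({})


model = Model("default_model")
config = {"configurable": {"model_name": "some_model"}}

print(model.invoke(config))            # override applied -> "some_model"
print(Executor(model).invoke(config))  # override lost -> "default_model"
```

That would match the trace below, where the OpenAI client receives `'model': 'default_model'` and the default temperature rather than the per-request values.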

Some of the related values from certain functions

# returned value of config in fetch_config_from_header()
{'configurable': {'model_name': 'some_model', 'default_headers': {'Content-Type': 'application/json', 'api-key': 'some_api_key'}, 'model_kwargs': {'user': 'some_user'}, 'openai_api_base': 'https://someendpoint.com/some_model', 'temperature': 0.6}}

# values of cast_to, opts in openai's _base_client.py AsyncAPIClient.post()
cast_to: <class 'openai.types.chat.chat_completion.ChatCompletion'>
opts: method='post' url='/chat/completions' params={} headers=NOT_GIVEN max_retries=NOT_GIVEN timeout=NOT_GIVEN files=None idempotency_key=None post_parser=NOT_GIVEN json_data={'messages': [{'content': 'some_content', 'role': 'system'}], 'model': 'default_model', 'n': 1, 'stream': False, 'temperature': 0.7} extra_json=None

System Info

langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-text-splitters==0.2.2
langgraph==0.1.5
langserve==0.2.2
langsmith==0.1.82
openai==1.35.7

platform = linux
python version = 3.12.4
quadcube commented 2 weeks ago

Related issues

https://github.com/langchain-ai/langserve/issues/314

#17555

quadcube commented 2 weeks ago

@spike-spiegel-21 here's an example to reproduce the issue with a configurable agent. Thanks in advance!

eyurtsev commented 2 weeks ago

Hi @quadcube, this is a known issue with the agent executor in langchain. We're migrating users to the executor in langgraph, which has been built to address a lot of the underlying issues with the executor in langchain.

quadcube commented 2 weeks ago

> Hi @quadcube, this is a known issue with the agent executor in langchain. We're migrating users to the executor in langgraph, which has been built to address a lot of the underlying issues with the executor in langchain.

Thanks for the info! I will test with langgraph’s executor tomorrow morning and report back