eyurtsev opened 9 months ago
Will this be fixed? There is no new activity, so I'm not sure whether an alternative exists for configurable_fields with an agent.
Currently using langchain version 0.2.1, I basically want to configure the prompt at runtime so I can update the prompts used by the model without redeploying the app. Concretely, during per_req_config_modifier I fetch the prompt template from an API (with caching) and use a configurable field to pass it to the agent. It works with a chain like this:
```python
prompt = ChatPromptTemplate.from_messages(messages).configurable_fields(
    messages=ConfigurableField(
        id="prompt_messages",
        name="prompt messages",
    )
)
chain = prompt | llm

async def per_req_config_modifier(config):
    messages = get_prompt_from_api(prompt_name="chain")
    config["configurable"]["prompt_messages"] = messages
    return config

add_routes(
    app,
    chain,
    path="/chain",
    disabled_endpoints=["playground"],
    per_req_config_modifier=per_req_config_modifier,
)
```
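For reference, the mechanism above can be reduced to a few lines of plain Python (hypothetical stand-ins here: `get_prompt_from_api` is faked, and `render_prompt` only mimics what ConfigurableField does conceptually; this is not LangServe's actual internals): the modifier injects the fetched messages under the field id, and the downstream runnable prefers that override to its default.

```python
import asyncio

# Hypothetical stand-in for the real API call (the real one is cached).
def get_prompt_from_api(prompt_name):
    return [("system", f"You are the {prompt_name} assistant.")]

async def per_req_config_modifier(config):
    # Inject the fetched messages under the ConfigurableField id.
    config.setdefault("configurable", {})
    config["configurable"]["prompt_messages"] = get_prompt_from_api("chain")
    return config

def render_prompt(default_messages, config):
    # What ConfigurableField does, conceptually: prefer the per-request
    # override keyed by the field id, else fall back to the default.
    return config.get("configurable", {}).get("prompt_messages", default_messages)

config = asyncio.run(per_req_config_modifier({}))
print(render_prompt([("system", "default")], config))
```

The point of the sketch is only that everything hinges on the `configurable` dict surviving from the request all the way down to the runnable that consumes it.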
But it doesn't work with an agent created with:
```python
prompt = ChatPromptTemplate.from_messages(messages).configurable_fields(
    messages=ConfigurableField(
        id="prompt_messages",
        name="prompt messages",
    )
)
agent = create_sql_agent(
    llm2,
    db=None,
    toolkit=toolkit,
    agent_type="openai-tools",
    extra_tools=[another_tool],  # in addition to tools from db
    max_iterations=15,
    agent_executor_kwargs={"handle_parsing_errors": True},
    prompt=prompt,
    verbose=True,
)
```
it doesn't work: the configurable field is ignored.
The solution proposed in https://github.com/langchain-ai/langserve/blob/main/examples/configurable_agent_executor/server.py doesn't seem to work for this either.
Is there an alternative way to do this that I missed? Or are agents no longer relevant, with an equivalent in LangGraph that works with config fields?
I could make it work in a notebook, but the changes I needed to make were quite involved.
The first issue is that the prompt.partial calls inside create_sql_agent break the configurable field. An example of such a .partial call is:
```python
if "dialect" in prompt.input_variables:
    prompt = prompt.partial(dialect=toolkit.dialect)
```
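To see why this breaks things, here is a toy reduction (these are stand-in classes of my own, not the real LangChain ones; the assumption is that the real `.partial` behaves analogously): `.partial` returns a fresh base object, so any wrapper added by `configurable_fields` is silently dropped.

```python
class Prompt:
    def __init__(self, messages, partial_vars=None):
        self.messages = messages
        self.partial_vars = partial_vars or {}

    def partial(self, **kwargs):
        # Returns a plain Prompt -- any configurable wrapper is lost.
        return Prompt(self.messages, {**self.partial_vars, **kwargs})

    def configurable_fields(self, **field_ids):
        return ConfigurablePrompt(self, field_ids)

class ConfigurablePrompt(Prompt):
    """Stand-in for the wrapper that configurable_fields returns."""
    def __init__(self, base, field_ids):
        super().__init__(base.messages, dict(base.partial_vars))
        self.field_ids = field_ids

prompt = Prompt(["system: hi"]).configurable_fields(messages="prompt_messages")
print(isinstance(prompt, ConfigurablePrompt))  # → True: configurable here

prompt = prompt.partial(dialect="sqlite")      # as create_sql_agent does
print(isinstance(prompt, ConfigurablePrompt))  # → False: wrapper dropped
```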
This first issue is easy to fix: I just modified create_sql_agent to re-apply the configurable field after the part where it makes the .partial calls:
```python
prompt = prompt.configurable_fields(
    messages=ConfigurableField(
        id="prompt_messages",
        name="prompt messages",
    )
)
```
But then there is a second issue: AgentExecutor and RunnableMultiActionAgent forward the callbacks field of the config, but not the configurable field, when making their chained calls. For example, if we do:
```python
async for chunk in agent.astream(
    input={"input": "an arbitrary question"},
    config={"configurable": {"prompt_messages": new_messages}},
):
    print(chunk)
```
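Before tracing the real call chain, the failure mode can be reduced to a toy (`ToyAgentExecutor` and `InnerRunnable` are illustrative stand-ins of my own, not the real classes): the outer object forwards only the callbacks from the config, so the configurable override never reaches the inner runnable.

```python
class InnerRunnable:
    def astream_like(self, config):
        # Mirrors the configurable prompt: use the override if present.
        configurable = config.get("configurable", {})
        return configurable.get("prompt_messages", ["default messages"])

class ToyAgentExecutor:
    def __init__(self, runnable):
        self.runnable = runnable

    def astream_like(self, config):
        # Only callbacks are forwarded; "configurable" is dropped.
        return self.runnable.astream_like(
            {"callbacks": config.get("callbacks", [])}
        )

executor = ToyAgentExecutor(InnerRunnable())
out = executor.astream_like({"configurable": {"prompt_messages": ["new"]}})
print(out)  # → ["default messages"]: the override was silently lost
```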
Tracing the call chain:

1. astream creates an AgentExecutorIterator object, passing fields from the config like callbacks, metadata, ... but not configurable.
2. Iteration then goes through AgentExecutorIterator.__aiter__, which calls self.agent_executor._aiter_next_step, again without the configurable field (which it doesn't have).
3. AgentExecutor._aiter_next_step calls self.agent.aplan, again without the config, even though it does pass the callbacks.
4. RunnableMultiActionAgent.aplan calls self.runnable.astream. Here runnable.astream can take a config, but only config={"callbacks": callbacks} is given. It would work if {"callbacks": callbacks, "configurable": configurable} were given, but the configurable field was never passed down.

So this second issue, the configurable field not being passed along, is the more problematic one to me. I can patch all those functions to pass the configurable field with minimal code, but that seems complicated to maintain, as they will probably change again in the future. And I have to patch many functions. For the agent.astream endpoint alone I have to patch:
- AgentExecutor.astream
- AgentExecutorIterator.__aiter__
- AgentExecutor._aiter_next_step
- RunnableMultiActionAgent.aplan

For the non-async version I also need to change:

- AgentExecutor.stream
- AgentExecutorIterator.__iter__
- AgentExecutor._iter_next_step
- RunnableMultiActionAgent.plan

And for invoke I additionally have to change:

- AgentExecutor.invoke
- AgentExecutor._call

(plus the async versions for ainvoke)
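For what it's worth, the shape of such a patch can be sketched generically (pure-Python stand-ins: `forward_configurable` and `ToyExecutor` are my own names, and a real patch would wrap the methods listed above): a decorator re-attaches a stashed configurable dict to the config before delegating, so every hop in the chain sees it.

```python
import functools

def forward_configurable(method):
    """Wrap a method taking config=... so 'configurable' survives the hop."""
    @functools.wraps(method)
    def wrapper(self, *args, config=None, **kwargs):
        config = dict(config or {})
        # Merge the configurable dict stashed on the executor (captured
        # once at the top-level entry point) into the downstream config.
        stashed = getattr(self, "_configurable", None)
        if stashed:
            config.setdefault("configurable", {}).update(stashed)
        return method(self, *args, config=config, **kwargs)
    return wrapper

class ToyExecutor:
    def __init__(self):
        self._configurable = {"prompt_messages": ["new system msg"]}

    @forward_configurable
    def plan(self, *, config=None):
        # The downstream call finally sees the configurable field.
        return config["configurable"]["prompt_messages"]

print(ToyExecutor().plan(config={"callbacks": []}))  # → ["new system msg"]
```

This only illustrates the mechanism; applying it to the real AgentExecutor still means touching every method in the lists above, which is exactly the maintenance burden described.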
Users reasonably expect that when an agent is configurable, the configuration information is properly propagated through the AgentExecutor. However, this is not the case at the moment.
The bug is demonstrated here: https://github.com/langchain-ai/langserve/pull/376
It needs to be fixed in langchain.
Original issue here: https://github.com/langchain-ai/langserve/issues/314