Abd-elr4hman opened 5 days ago
I believe solving this issue might be useful to continue progress on the PR https://github.com/langchain-ai/langchain/pull/25756.
I opened a new issue for it because it did not seem to have anything to do with the RunnableWithMessageHistory implementation itself.
Checked other resources
Example Code
.
Error Message and Stack Trace (if applicable)
No response
Description
Examples of runnables with different signatures:
In the RunnableWithMessageHistory documentation there are three main examples of runnables with different signatures. The first one (dict input, messages output) is:
```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai.chat_models import ChatOpenAI

model = ChatOpenAI(api_key=OPENAI_API_KEY)
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're an assistant who's good at {ability}. Respond in 20 words or fewer",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
runnable = prompt | model

store = {}


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]


with_message_history = RunnableWithMessageHistory(
    runnable,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)
```
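For reference, the other two examples from the documentation look roughly like this (a sketch reusing get_session_history from above; the exact code in the docs may differ slightly):

```python
from langchain_core.runnables import RunnableParallel

# Second example: messages input, dict output
# (the chat model is wrapped in a RunnableParallel, so the chain returns a dict)
with_message_history_dict_output = RunnableWithMessageHistory(
    RunnableParallel({"output_message": ChatOpenAI(api_key=OPENAI_API_KEY)}),
    get_session_history,
    output_messages_key="output_message",
)

# Third example: messages input, messages output
# (the bare chat model is passed directly)
with_message_history_messages_output = RunnableWithMessageHistory(
    ChatOpenAI(api_key=OPENAI_API_KEY),
    get_session_history,
)
```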
Comparing the input schema generated for each example
If I run the following lines for each example:
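Roughly, for the runnable of each example, the check is along these lines (a sketch; the checks use pydantic v2's BaseModel and RootModel):

```python
from pydantic import BaseModel, RootModel

# sketch: repeat for the runnable of each of the three examples
schema = runnable.get_input_schema()
print(issubclass(schema, BaseModel))  # subclass of pydantic BaseModel?
print(issubclass(schema, RootModel))  # subclass of pydantic RootModel?
```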
I get the following outputs respectively:
First example (dict input, messages output):
True --> input schema is a subclass of pydantic BaseModel
False --> input schema is not a subclass of pydantic RootModel

Second example (messages input, dict output):
True --> input schema is a subclass of pydantic BaseModel
False --> input schema is not a subclass of pydantic RootModel

Third example (messages input, messages output):
True --> input schema is a subclass of pydantic BaseModel
True --> input schema is a subclass of pydantic RootModel
Inconsistencies
Even though both the second and the third examples expect messages input, the underlying runnable generates a completely different schema type for each.
The schema for the second example contains a required "root" field, which is normally only used with RootModel subclasses, yet here the field is defined on a plain BaseModel subclass. I believe this is unintended, and it is the result of wrapping the runnable (ChatOpenAI(), which has a messages input signature) in a RunnableParallel.
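A minimal sketch of what I mean, comparing the bare model with the wrapped one (the comments describe what I observe when inspecting the generated schemas):

```python
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI
from pydantic import RootModel

model = ChatOpenAI()
wrapped = RunnableParallel({"output_message": model})

bare_schema = model.get_input_schema()       # messages input signature
wrapped_schema = wrapped.get_input_schema()  # same messages input, but the model is wrapped

print(issubclass(bare_schema, RootModel))     # True: modelled as a RootModel
print(issubclass(wrapped_schema, RootModel))  # False: a plain BaseModel that still carries a required "root" field
```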
I would expect the underlying runnables of the second and third examples, i.e. the ChatOpenAI() model wrapped in a RunnableParallel and the bare ChatOpenAI(), to produce a similar input schema when I call get_input_schema().
I would love to get clarity on whether the current RunnableParallel get_input_schema behaviour is intended.
System Info
System Information
Package Information
Optional packages not installed
Other Dependencies