Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.
[X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
from langchain_openai import AzureChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnableSequence
from langchain_core.messages import HumanMessage
def langchain_model(prompt_func: callable) -> RunnableSequence:
    # Build a simple chain: prompt builder -> Azure model -> string output.
    model = AzureChatOpenAI(azure_deployment="gpt-35-16k", temperature=0)
    return RunnableLambda(prompt_func) | model | StrOutputParser()

def prompt_func(
    _dict: dict,
) -> list:
    question = _dict.get("question")
    texts = _dict.get("texts")
    text_message = {
        "type": "text",
        "text": (
            "You are a classification system for Procurement Documents. Answer the question solely on the provided Reference texts.\n"
            "If you cant find a answer reply exactly like this: 'Sorry i dont have an answer for youre question'\n"
            "Return the answer as a string in the language the question is written in.\n\n "
            f"User-provided question: \n"
            f"{question} \n\n"
            "Reference texts:\n"
            f"{texts}"
        ),
    }
    return [HumanMessage(content=[text_message])]

model = langchain_model(prompt_func=prompt_func)

# Iterating the RunnableSequence yields (field_name, value) pairs from the
# underlying pydantic model; the 'middle' field is a list holding the
# AzureChatOpenAI component, hence the runnable[0].dict() call.
for steps, runnable in model:
    try:
        print(runnable[0].dict())
    except Exception:
        print(runnable)
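The chain is not actually needed to reproduce this; calling .dict() on the model directly shows the same truncated output. A minimal sketch, assuming valid Azure OpenAI credentials are set in the environment:

from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-35-16k", temperature=0)
# Prints the truncated dict shown below; the configured deployment
# ("gpt-35-16k") does not appear anywhere in it.
print(model.dict())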
Error Message and Stack Trace (if applicable)
for AzureChatOpenAI as a component you will just get this output as dict:
{'model': 'gpt-3.5-turbo', 'stream': False, 'n': 1, 'temperature': 0.0, '_type': 'azure-openai-chat'}
Description
This is not enough if you want to log the model with MLflow: the deployment_name is definitely needed and has to be retrievable from dict(). See the related issue: https://github.com/mlflow/mlflow/issues/11439. The expected output would include more detail, at least the deployment name.
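As a workaround until dict() carries more detail, the value can be read off the instance itself and merged back in before logging. A minimal sketch, assuming the deployment name is still accessible via the deployment_name attribute (populated through the azure_deployment alias in current langchain_openai):

from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-35-16k", temperature=0)

# dict() drops the deployment, but the instance still holds it, so merge
# it back into the params handed to MLflow (the attribute name is an
# assumption based on the current langchain_openai field definition).
params = {**model.dict(), "azure_deployment": model.deployment_name}
print(params)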
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: