langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
88.47k stars 13.89k forks

Misleading logs #23239

Open 954-Ivory opened 1 week ago

954-Ivory commented 1 week ago

Checked other resources

Example Code

import os

from dotenv import load_dotenv
from langchain_core.globals import set_debug
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

set_debug(True)
load_dotenv()
model = ChatOpenAI(
    api_key=os.getenv('OPENAI_API_KEY'),
    base_url=os.getenv('OPENAI_BASE_URL'),
    model="gpt-3.5-turbo"
)

messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

if __name__ == "__main__":
    print(model.invoke(messages))

Error Message and Stack Trace (if applicable)

[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: Translate the following from English into Italian\nHuman: hi!"
  ]
}

Description

Shouldn't it be like this?

[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    {"System": "Translate the following from English into Italian"},
    {"Human": "hi!"}
  ]
}

System Info

I have tried it in two conda envs:

- langchain 0.2.5, Windows 11, Python 3.11.9
- langchain 0.1.10, Windows 11, Python 3.11.7

wenngong commented 1 week ago

The terminal output you see is just intermediate logging from the ***CallBackHandler; the handler flattens the chat prompt messages into a single string for logging.

So it's not a bug, I believe.

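For reference, the flattening described above can be sketched in plain Python. This is a minimal illustration of the behavior (roughly what joining role-prefixed messages with newlines produces, as seen in the log), not LangChain's actual implementation; `flatten_messages` is a hypothetical name:

```python
def flatten_messages(messages):
    """Join (role, content) pairs into the single prompt string seen in the logs."""
    # Each message becomes "Role: content"; messages are joined with newlines,
    # which is why the log shows one string instead of structured messages.
    return "\n".join(f"{role}: {content}" for role, content in messages)


messages = [
    ("System", "Translate the following from English into Italian"),
    ("Human", "hi!"),
]

print(flatten_messages(messages))
# System: Translate the following from English into Italian
# Human: hi!
```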

954-Ivory commented 1 week ago

> The terminal output you see is just intermediate logging from the ***CallBackHandler; the handler flattens the chat prompt messages into a single string for logging.
>
> So it's not a bug, I believe.

Yes, I have checked the network request and there is no problem there. But output like this is misleading.