langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

provide visibility into final prompt #912

Closed: wskish closed this issue 10 months ago

wskish commented 1 year ago

For debugging or other traceability purposes it is sometimes useful to see the final prompt text as sent to the completion model.

It would be good to have a mechanism that logged or otherwise surfaced (e.g. for storing to a database) the final prompt text.

hwchase17 commented 1 year ago

this should be possible with tracing! have you tried it out? https://langchain.readthedocs.io/en/latest/tracing.html
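
Tracing has changed a lot since this was posted; the legacy doc linked above described a different, local setup, while current versions use LangSmith. A hedged sketch of the current environment-variable approach (the variable names are the LangSmith ones, not necessarily those from the legacy doc):

import os

# record every chain / LLM call, including the final rendered prompt, in LangSmith
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"

Any chain or agent run after this point will then show up in the trace, including the fully rendered prompts.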

solyarisoftware commented 1 year ago

I had a look at the tracing/debug system documentation; nevertheless, a minimal requirement could be a "very verbose" flag for LLMs and/or chains that prints out the LLM prompts (+ completions).

BTW, this is not a bug report but a feature request.

Consider the following chunk:

llm = OpenAI(temperature=0)

template='''\
Please respond to the questions accurately and succinctly. \
If you are unable to obtain the necessary data after seeking help, \
indicate that you do not know.
'''

prompt = PromptTemplate(input_variables=[], template=template)

llm_weather_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

tools = [Weather, Datetime]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

The output of the above program shows the agent behavior in nicely colorized text, but not the final prompt that is actually sent to the LLM.

hwchase17 commented 1 year ago

there is a verbose flag you can pass into the llm! llm = OpenAI(temperature=0, verbose=True) should print it out

solyarisoftware commented 1 year ago

Thanks Harrison,

$ cat agent.py

#
# tools_agent.py
#
# zero-shot react agent that answers questions using the available tools
# - Weather
# - Datetime
#
import sys

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain import LLMChain
from langchain.prompts import PromptTemplate

# import custom tools
from weather_tool import Weather
from datetime_tool import Datetime

llm = OpenAI(temperature=0, verbose=True)

template='''\
Please respond to the questions accurately and succinctly. \
If you are unable to obtain the necessary data after seeking help, \
indicate that you do not know.
'''

prompt = PromptTemplate(input_variables=[], template=template)

# Load the tool configs that are needed.
llm_weather_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True
)

tools = [
    Weather,
    Datetime
]

# Construct the react agent type.
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True
)

if __name__ == '__main__':
    if len(sys.argv) > 1:
        question = ' '.join(sys.argv[1:])
        print('question: ' + question)

        # run the agent
        agent.run(question)
    else:
        print('Agent answers questions using the Weather and Datetime custom tools')
        print('usage: py tools_agent.py <question sentence>')
        print('example: py tools_agent.py what time is it?')

$ py agent.py "how is the weather today in Genova?"

question: how is the weather today in Genova?

> Entering new AgentExecutor chain...
 I need to get the weather forecast for Genova
Action: weather
Action Input: {"when":"today","where":"Genova"}
Observation: {"forecast": "sunny", "temperature": "20 degrees Celsius"}
Thought: I now know the weather forecast for Genova
Final Answer: The weather today in Genova is sunny with a temperature of 20 degrees Celsius.

> Finished chain.

hwchase17 commented 1 year ago

good catch - we need to fix this bug probably, but currently the way to do it would actually be to set

agent.agent.llm_chain.verbose=True

solyarisoftware commented 1 year ago

Thanks. The workaround works, but yes I think it's a bug.

MacYang555 commented 1 year ago

I am studying the project and wanted to make some contributions, and fixing some bugs/issues seemed like a good start, so I read through this issue and the related code. I think the issue happens because there are actually two chains:

initialize_agent(..., verbose=True) just makes the outer chain verbose, while the workaround agent.agent.llm_chain.verbose=True targets the inner chain (see the sketch below).
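
A minimal sketch of the two verbosity levels, reusing the tools and llm from the script earlier in this thread:

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True  # outer AgentExecutor chain: prints thoughts, tool calls and observations
)
agent.agent.llm_chain.verbose = True  # inner LLMChain: prints the final rendered prompt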

With the above analysis, I think there might be 2 ways to fix the issue:

  1. When initialize_agent is called with verbose=True, also set the inner chain's verbose flag. This keeps the API simple and easy to understand; the downside is that the output might be overly verbose.
  2. Call initialize_agent like initialize_agent(..., agent_kwargs={"verbose": True}, verbose=True), where agent_kwargs controls the inner chain. This gives precise control over the output, but the API might be a little confusing.

@hwchase17 do you have some suggestions?

solyarisoftware commented 1 year ago

Thanks! I'd add a note on the functional meaning of "verbose".

So, I see the LLM verbosity as something different (at a lower level) from the agent verbosity. Does that make sense?

MacYang555 commented 1 year ago

Maybe adding a verbose_llm parameter to initialize_agent would make the API easier to understand.

solyarisoftware commented 1 year ago

Well, it could be a way, but currently verbose=True behaves differently depending on where you set it.

So llm, chain, and agent already have their own distinct verbose flags, with possibly different meanings. I'd prefer to keep the same flag name "verbose"; what is not fully clear is what verbose means for each of llm, chain, and agent (see the sketch below).
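
For reference, a sketch of the three places the flag can currently be set, taken from the earlier examples in this thread:

llm = OpenAI(temperature=0, verbose=True)                            # LLM-level verbosity
llm_weather_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)   # chain-level verbosity
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)  # agent-level verbosity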

maxbaluev commented 1 year ago

Yep, right now it is impossible to see the final executed prompts :(

alberduris commented 1 year ago

Is there any update on this? I think it is critical to be able to see the final prompt sent to the LLMs. Currently, working with LangChain is too opaque; it makes it really difficult to build complex chains without making mistakes.

medram commented 1 year ago

Having the same issue; I need to see the final prompt too.

forin87 commented 1 year ago

It looks like one possible workaround to get the final prompt is to attach a StdOutCallbackHandler to the chain:

from langchain.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler()
chain.run(..., callbacks=[handler])

rogerbock commented 1 year ago

Setting agent.agent.llm_chain.verbose=True worked for me with the latest version (langchain==0.0.216). I agree that the expected behavior is that setting verbose=True on the LLM would do this.

Im-Himanshu commented 1 year ago

Setting langchain.debug=True will print every prompt the agent executes, with all the details possible. Use this code:

import langchain
langchain.debug=True
response = agent.run(prompt)
langchain.debug=False

The output of this may not be as pretty as verbose. I think verbose is designed to be higher level, for individual queries, but for debugging and granular control, debug is more useful.

utbdankar commented 1 year ago

Using langchain (0.0.256).

Building on forin87's answer, the below logs the prompt to the console:

import langchain
langchain.verbose=True

If you want the prompt as a variable, I'd suggest using callbacks:

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.callbacks.base import BaseCallbackHandler

class MyCustomHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        # parse `serialized` / `inputs` here and save the prompt to a database
        print(f"on_chain_start inputs: {inputs}")

handler = MyCustomHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
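
If what you need is the fully formatted prompt string rather than the raw chain inputs, a variant of the handler can hook on_llm_start instead, which receives the final prompt text. A sketch against the same callback API:

class FinalPromptHandler(BaseCallbackHandler):
    def __init__(self):
        self.final_prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # `prompts` holds the fully formatted prompt strings sent to the LLM
        self.final_prompts.extend(prompts)
        for p in prompts:
            print(p)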

Hopefully this helps!

mikelam14 commented 1 year ago

Is there an update to this? On top of the final prompt, I believe the final raw response coming from OpenAI would also be helpful: things like prompt token count, completion token count, stop reason, etc.
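
Not from this thread, but for the token-count part of that request, langchain's get_openai_callback context manager is one commonly used option (a sketch, assuming an OpenAI-backed agent as in the earlier examples):

from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    agent.run("how is the weather today in Genova?")

print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)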

dosubot[bot] commented 11 months ago

Hi, @wskish

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you raised requests a mechanism to provide visibility into the final prompt text sent to the completion model for debugging and traceability purposes. The comments discuss various workarounds and potential solutions, including setting the verbose flag for the LLM and agent instances, using callback handlers, and modifying the langchain debug setting. There is also a suggestion to add a verbose_llm parameter to initialize_agent for better control over the output. The issue has garnered significant interest and support from the community, with multiple users expressing the need for this feature.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!

cryoff commented 9 months ago

Would be interesting to see if there are updates on this issue

nathanjones4323 commented 8 months ago

If anyone is looking for a simple string output of a single prompt, you can use the .format() method of ChatPromptTemplate; this should work with any BaseChatPromptTemplate subclass.

I struggled to find this as well. In my case I wanted the final formatted prompt string used inside the API call.

Example usage:

from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
)
from langchain.schema import SystemMessage

# Define a partial variable for the chatbot to use
my_partial_variable = """APPLE SAUCE"""

# Initialize your chat template with partial variables
prompt_messages = [
    # System message
    SystemMessage(content=("""You are a hungry, hungry bot""")),
    # Instructions for the chatbot to set context and actions
    HumanMessagePromptTemplate(
        prompt=PromptTemplate(
            template="""Your life goal is to search for some {conversation_topic}. If you encounter food in the conversation below, please eat it:\n###\n{conversation}\n###\nHere is the food: {my_partial_variable}""",
            input_variables=["conversation_topic", "conversation"],
            partial_variables={"my_partial_variable": my_partial_variable},
        )
    ),
    # Placeholder for additional agent notes
    MessagesPlaceholder("agent_scratchpad"),
]

prompt = ChatPromptTemplate(messages=prompt_messages)
prompt_as_string = prompt.format(
    conversation_topic="Delicious food",
    conversation="Nothing about food to see here",
    agent_scratchpad=[],
)
print(prompt_as_string)

Output:

System: You are a hungry, hungry bot
Human: Your life goal is to search for some Delicious food. If you encounter food in the conversation below, please eat it:
###
Nothing about food to see here
###
Here is the food: APPLE SAUCE

cryoff commented 8 months ago

I ended up using callbacks (e.g. StdOut, a self-implemented loguru-based handler, langfuse, arize-phoenix, mlflow, wandb).

krishna-praveen commented 8 months ago

Wait, what is the final solution for this, though? I can't wrap my head around making things this complex for something that should be basic.

cryoff commented 7 months ago

@krishna-praveen for me it is using the community-provided / self-implemented LangChain callback mechanism.

pabloesdev1 commented 7 months ago

chain.prompt.format_prompt(**input)
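
A minimal usage sketch of the one-liner above (the chain and its input are assumed from the earlier "1 + {number} = " LLMChain example; format_prompt returns a PromptValue, hence to_string()):

formatted = chain.prompt.format_prompt(number=2)
print(formatted.to_string())
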
TimofeyBiryukov commented 2 months ago

Yeah, trying to find a solution for this as well, on the JS version. I have an agent with tools, and I'd like to see the final prompt that ChatGPT will receive. chain.prompt.format({...inputs}) only returns the part of the prompt that my code provides, which is clearly not the entirety of it.

peterbraden commented 1 month ago

The chaining concept is a little difficult to get visibility into. I solved it with the following:

from langchain_core.runnables import RunnableLambda

def debug_lambda(x):
    # print the formatted prompt flowing through the chain, then pass it along unchanged
    print(x)
    return x

debug_runnable = RunnableLambda(debug_lambda)

This means you can simply pipe the debug runnable into your chain to see the prompt that flows through, i.e.:

                      | self.prompt
                      | debug_runnable
                      | self.model
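
For a self-contained version of the same idea, here is a sketch; it assumes the langchain_core / langchain_openai packages and reuses the "1 + {number} = " prompt from an earlier comment:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import OpenAI

def debug_lambda(x):
    # x is the fully formatted prompt value about to be sent to the model
    print(x.to_string())
    return x

prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | RunnableLambda(debug_lambda) | OpenAI()
print(chain.invoke({"number": 2}))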

Investigating a final prompt is such a common thing that it seems like there should be a nicer way of doing this.