langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.95k stars 15.38k forks

Prompt Template Injection??? #25132

Closed berkaybgk closed 3 months ago

berkaybgk commented 3 months ago

Checked other resources

Example Code

""" You will be asked a question about a dataframe, and you will determine the necessary function that should be run to answer the question. You won't answer the question yourself; you will only state the name of a function, or respond with NONE as I will explain later. I will explain a few functions which you can use if the user asks you to analyze the data. The methods will provide the necessary analysis and prediction. You must state the method's name and the required parameters to use it. Each method has a dataframe as its first parameter, which will be given to you, so you can just state DF for that parameter. Also, if the user's question doesn't state specific metrics, you can pass ['ALL'] as the list of metrics. Your answer must only contain the function name with the parameters.

The first method is create_prophet_predictions(df, metrics_for_forecasting, periods=28). It takes 3 arguments: the 1st is a dataframe, the 2nd is a list of metrics we want the forecast for, and the 3rd is an optional period argument representing the number of days by which we want to extend our dataframe. It returns an extended version of the initial dataframe with the future prediction results appended. If forecasting fails, it returns the initial dataframe without additions, so no error is raised in any case. You will use this method if the user wishes to learn about the future state of their campaigns. The user doesn't have to state a period; you can just choose 2 weeks or a month to demonstrate.

The second method is calculate_statistics(df, metrics). It takes 2 arguments: the 1st is a dataframe and the 2nd is a list of metrics. It returns a dictionary of different statistics for each metric provided in the 2nd parameter. The returned dictionary looks like this: {'metric': [], 'mean': [], 'median': [], 'std_dev': [], 'variance': [], 'skewness': [], 'kurtosis': [], 'min': [], 'max': [], '25th_percentile': [], '75th_percentile': [], 'trend_slope': [], 'trend_intercept': [], 'r_value': [], 'p_value': [], 'std_err': []} If any of the keys of this dictionary is asked about in the question, this method should be used. Also, if the user asks for an overall analysis of their campaigns, this method should be used with the metrics parameter set to ['ALL'] to comment on specific metrics. These statistics provide a comprehensive overview of the central tendency, dispersion, distribution shape, and trend characteristics of the data, as well as the relationship between variables in regression analysis; simple statistics like mean, min, and max can also help you answer questions.

The third method is feature_importance_analysis(df, target_column, size_column, feature_columns, is_regression=True). It takes 5 parameters: the 1st is the dataframe, the 2nd is the column name of the target variable, the 3rd is the name of the column containing the size of our target column (used to adjust the dataframe), the 4th is the feature_columns list, which should be the list of features whose importance we want to analyze, and the 5th is a boolean indicating whether our model is a regression or classification model (True = regression, False = classification). It uses machine learning algorithms to calculate the feature importance of the features you provide. It also gives information about our audience size and target size. Lastly, it gives the single and combined SHAP values of the given features to determine each one's contribution to the feature importance analysis. If the question contains the phrases "audience size", "target size", or "importance", or if the user wants to know why the model thinks some features will impact our results more significantly, there is a very high chance you will use this function.


from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate
from langchain_openai import ChatOpenAI

analysis_examples = [
    {
        "question": "Can you analyze my top performing 10 Google Ads campaigns in terms of CTR?",
        "answer": "calculate_statistics(DF, ['ALL'])"
    },
    {
        "question": "Can you give me the projection of my campaign's cost and cpm results for the next week?",
        "answer": "create_prophet_predictions(DF, ['cost', 'cpm'], 7)"
    },
    {
        "question": "Which metric in my last google ads campaign serves a key role?",
        "answer": "feature_importance_analysis(DF, 'revenue', 'cost', ['ctr', 'roas', 'cpc', 'clicks', 'impressions'], True)"
    },
    {
        "question": "What is the mean of the cost values of my top performing 10 campaigns based on ROAS values?",
        "answer": "calculate_statistics(DF, ['cost'])"
    },
]

analysis_example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{question}"),
        ("ai", "{answer}"),
    ]
)

analysis_few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=analysis_example_prompt,
    examples=analysis_examples,
)

with open("/analysis/guidance_statistics_funcs.txt", "r") as f:
    guidance_text = f.read()

analysis_final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", guidance_text),
        analysis_few_shot_prompt,
        ("human", "{input}"),
    ]
)

analysis_chain = analysis_final_prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

# analysis_sentence is the user's question, defined elsewhere in my app
response = analysis_chain.invoke({"input": analysis_sentence})

Error Message and Stack Trace (if applicable)

ErrorMessage: 'Input to ChatPromptTemplate is missing variables {"\'metric\'"}. Expected: ["\'metric\'", \'input\'] Received: [\'input\']'

I couldn't capture the whole stack trace since I run this in a web app, but the exception is raised during the invoke call.
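For context, chat prompt templates default to f-string (str.format-style) parsing, so a brace pair inside the guidance text is read as a variable placeholder. A minimal sketch with the stdlib string.Formatter, which follows the same syntax, shows how the first dictionary key ends up as the "missing variable" named in the error above:

```python
from string import Formatter

# A fragment of the guidance text: the dict keys sit inside single braces,
# so a str.format-style parser reads the first key as a placeholder name.
template = "The returned dictionary looks like this: {'metric': [], 'mean': []}"

variables = [field for _, field, _, _ in Formatter().parse(template) if field is not None]
print(variables)  # → ["'metric'"]
```

This matches the error exactly: the template machinery expects an input variable literally named 'metric' (quotes included), alongside the intended input variable.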

Description

from langchain_core.prompts import ChatPromptTemplate

The error is caused by my prompt, specifically the guidance text I passed as the "system" message to the ChatPromptTemplate. In it I described the dictionary structure that one of my functions returns, but the curly braces I provided somehow caused an injection-like problem: the chain expected more input variables than I provided. When I deleted the first key of the dictionary in my prompt, the chain then expected the second key as an input instead. Once I removed the curly braces from my system prompt entirely, the issue was resolved, so I am certain the problem is caused by the ChatPromptTemplate object.
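Assuming the default f-string template format is in play, one workaround is to double every brace that should appear literally; str.format (whose syntax the f-string format follows) then emits a single brace instead of opening a placeholder, while single-brace variables still interpolate:

```python
# Doubled braces render as literal braces; {input} stays a real placeholder.
template = "The returned dictionary looks like this: {{'metric': [], 'mean': []}} Question: {input}"
rendered = template.format(input="What is the mean cost?")
print(rendered)
```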

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112
Python Version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)]

Package Information

langchain_core: 0.2.28
langchain: 0.2.10
langchain_community: 0.2.7
langsmith: 0.1.82
langchain_chroma: 0.1.1
langchain_experimental: 0.0.62
langchain_openai: 0.1.20
langchain_text_splitters: 0.2.1
langchainhub: 0.1.20
langgraph: 0.1.5

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

berkaybgk commented 3 months ago

Also, although I am not sure, this could be exploited maliciously if user input contains elements like curly braces.

eyurtsev commented 3 months ago

Use SystemMessage directly if you do not want interpolation. By default, the chat prompt template allows interpolation (e.g., your template already uses this feature for the human message).

from langchain_core.messages import SystemMessage

    analysis_final_prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessage(content=guidance_text), 
            analysis_few_shot_prompt,
            ("human", "{input}"), # <-- contains curly braces
        ]
    )
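A possible complement to this: if you want to keep interpolation for the human message but neutralize the braces inside the loaded guidance text, you can double them before building the template. escape_braces below is a hypothetical helper, not a LangChain API:

```python
def escape_braces(text: str) -> str:
    """Double every brace so f-string templates treat it as a literal."""
    return text.replace("{", "{{").replace("}", "}}")

guidance_text = "The returned dictionary looks like this: {'metric': [], 'mean': []}"
print(escape_braces(guidance_text))
```

The escaped text can then be passed as a ("system", ...) tuple as before, and only the explicitly declared variables (like the human message's input) remain as placeholders.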