langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Getting invalid input variable in prompt template using load_evaluator #8229

Closed · sagardspeed2 closed this issue 1 year ago

sagardspeed2 commented 1 year ago

System Info

Platform: Mac M2, Python 3.11

Who can help?

@hwchase17


Reproduction

from langchain.evaluation import load_evaluator
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate.from_template(
    """Given the input context, which is most similar to the reference label: A or B?
Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.

DATA
----
input: {input}
reference: {source}
A: {prediction}
B: {prediction_b}
---
Reasoning:

"""
)
evaluator = load_evaluator(
    "pairwise_string", prompt=prompt_template, requires_reference=True
)

print(evaluator.prompt)

This follows the pairwise string comparison example in the LangChain documentation: https://python.langchain.com/docs/guides/evaluation/comparison/pairwise_string

but when I execute it, I get an error about the PromptTemplate input variables:

{
    "message": "Input variables should be {'prediction_b', 'input', 'prediction'}, but got ['input', 'prediction', 'prediction_b', 'source']",
    "result": null,
    "success": false
}
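
Inspecting the parsed variables confirms where the extra name comes from (a minimal check against the template defined above):

# PromptTemplate.from_template infers the variable names from the
# {placeholders} in the template string, so the stray {source} shows up
# alongside the three names the evaluator expects.
print(prompt_template.input_variables)
# ['input', 'prediction', 'prediction_b', 'source']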

Expected behavior

It should not raise an error; it should return the result as a dict, like this:

{'reasoning': "Option A is most similar to the reference label. Both the reference label and option A state that the dog's name is Fido. Option B, on the other hand, gives a different name for the dog. Therefore, option A is the most similar to the reference label. \n",
 'value': 'A',
 'score': 1}
dosubot[bot] commented 1 year ago

Answer generated by a 🤖

Answer

I understand that you're encountering an error related to the input variables in your prompt template when using the load_evaluator function in the LangChain framework. The error message indicates a mismatch between the expected and provided input variables.

The issue stems from the variables in your prompt_template. You're currently using four variables: input, source, prediction, and prediction_b. However, the "pairwise_string" evaluator expects the variables input, prediction, and prediction_b, plus reference for the reference label when requires_reference=True; source is not a recognized variable name.

To resolve this, you should replace the source variable with reference in your prompt_template. Here's how you can modify your code:

from langchain.evaluation import load_evaluator
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate.from_template(
    """Given the input context, which is most similar to the reference label: A or B?
Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.

DATA
----
input: {input}
reference: {reference}
A: {prediction}
B: {prediction_b}
---
Reasoning:

"""
)
evaluator = load_evaluator(
    "pairwise_string", prompt=prompt_template, requires_reference=True
)

print(evaluator.prompt)

This should resolve the error as the input variables now align with what the "pairwise_string" evaluator expects.
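
Once the variables line up, a minimal invocation would look like the sketch below (the strings here are placeholder data, not from your application):

# Run the pairwise comparison; with requires_reference=True the call
# takes the reference label alongside the two candidate predictions.
result = evaluator.evaluate_string_pairs(
    input="What is the dog's name?",
    prediction="The dog's name is Fido.",
    prediction_b="The dog's name is Rex.",
    reference="The dog is called Fido.",
)
print(result)  # expected shape: {'reasoning': ..., 'value': 'A', 'score': 1}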

While it's possible to disable the validation of the template string by setting validate_template to False when creating the PromptTemplate, I wouldn't recommend this approach as it could lead to unexpected behavior if the template variables don't match the expected input variables.
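
For completeness, that discouraged escape hatch looks roughly like this (a sketch; template_str is a hypothetical name for the same template string as above, since validate_template only applies when constructing PromptTemplate directly):

# Discouraged: skip the consistency check between input_variables and the
# placeholders actually present in template_str. Mismatches then surface
# only later, at format time, or silently produce a malformed prompt.
prompt_template = PromptTemplate(
    template=template_str,
    input_variables=["input", "prediction", "prediction_b", "reference"],
    validate_template=False,
)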

If this solution works for you and you believe this issue could impact other users, you're welcome to make a pull request to improve the documentation or the error messaging.

If you're still encountering issues after trying this solution, it might be worth checking if you're using the latest version of LangChain, as the issue might have been addressed in a more recent update.

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

dosubot[bot] commented 1 year ago

Hi, @sagardspeed2! I'm Dosu, and I'm helping the LangChain team manage our backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you were experiencing an error when using the "load_evaluator" function in the LangChain library. It seems that the error was related to the input variables of the prompt template. I suggested that the issue might be caused by a mismatch between the expected and provided input variables in the template and provided a solution to resolve the error. However, both you and I reacted with thumbs down to the suggested solution.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and cooperation!