Answer generated by a 🤖
I understand that you're experiencing an issue where the final answer provided by the LangChain MRKL Agent is not as detailed as the observation. It seems like the model is assuming it has already provided a sufficient answer, and as a result, the final answer lacks the necessary detail.
One potential solution could be to adjust the `Thought` line that comes before the `Final Answer` in the prompt template. Instead of the model thinking "I now know the final answer", you could change this to something like "I have gathered detailed information to answer the question". This might encourage the model to provide a more comprehensive final answer.
Here's how you can modify the prompt template:
```python
FORMAT_INSTRUCTIONS = """Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I have gathered detailed information to answer the question
Final Answer: the final answer to the original input question"""
```
Please note that this is a heuristic approach and might not always lead to the desired results. The effectiveness of this approach would depend on the specific behavior of the language model being used.
You also have the option to create a custom LLMChain and feed the agent a self-defined prompt template, not restricted to the prompt generated by the `create_prompt` function, as long as it meets the agent's requirements. For `ZeroShotAgent`, you need to ensure that the output contains a string starting with "Action:" and a following string starting with "Action Input:", separated by a newline. A sketch of this pattern follows.
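Here is a minimal sketch, assuming the 0.0.x-era LangChain API; the tool, prompt text, and question are illustrative placeholders, not from the original discussion:

```python
from langchain import LLMChain, OpenAI
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent

# A stand-in tool; replace the lambda with a real search function.
tools = [
    Tool(
        name="Search",
        func=lambda q: "stub result for: " + q,
        description="Useful for looking up facts.",
    )
]

prefix = "Answer the following questions as best you can. You have access to the following tools:"
suffix = """Begin!

Question: {input}
Thought: {agent_scratchpad}"""

# create_prompt assembles prefix + tool descriptions + format instructions + suffix.
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    format_instructions=FORMAT_INSTRUCTIONS,  # the modified template defined above
    input_variables=["input", "agent_scratchpad"],
)

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

agent_executor.run("What is MRKL?")
```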
Furthermore, you can override the `create_prompt` method in a subclass to customize the prompt creation process. For example, you could modify the `format_instructions` argument to change the instructions given to the model. This could potentially influence the model's behavior and lead to more detailed final answers; see the sketch below.
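A minimal sketch of such a subclass, assuming the `ZeroShotAgent.create_prompt` classmethod from the 0.0.x API (the class and constant names here are hypothetical):

```python
from langchain.agents import ZeroShotAgent

DETAILED_FORMAT_INSTRUCTIONS = FORMAT_INSTRUCTIONS  # the modified template defined above

class DetailedZeroShotAgent(ZeroShotAgent):
    @classmethod
    def create_prompt(cls, tools, **kwargs):
        # Force the custom format instructions regardless of what callers pass.
        kwargs["format_instructions"] = DETAILED_FORMAT_INSTRUCTIONS
        return super().create_prompt(tools, **kwargs)
```

You would then construct the agent with `DetailedZeroShotAgent.from_llm_and_tools(llm, tools)` in place of the stock class.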
Lastly, you can modify the default prefix, format instructions, and suffix used in the prompt (defined in the agent's `prompt.py` file) to further customize the behavior of your agent. For example, you could modify `FORMAT_INSTRUCTIONS` to change the instructions given to the model, or modify `PREFIX` to provide additional context; either change could lead to more detailed final answers.
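If you would rather not edit the library's `prompt.py` in place, the same defaults can be overridden at construction time via `agent_kwargs`. A sketch, again assuming the 0.0.x `initialize_agent` API (the prefix text is illustrative):

```python
from langchain import OpenAI
from langchain.agents import AgentType, initialize_agent

CUSTOM_PREFIX = """You are a meticulous assistant. Answer the following
questions in detail. You have access to the following tools:"""

# agent_kwargs is forwarded to the agent's create_prompt, overriding the
# defaults from prompt.py without modifying the library.
agent = initialize_agent(
    tools,  # the same tool list as above
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={
        "prefix": CUSTOM_PREFIX,
        "format_instructions": FORMAT_INSTRUCTIONS,  # the modified template defined above
    },
    verbose=True,
)
```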
Please remember that these are potential solutions and might not always work as expected. If you find a solution that works well, and you believe it could be beneficial to other users, feel free to make a pull request. If you're still experiencing issues, I would recommend checking if you're using the latest version of LangChain, as updates often include bug fixes and improvements.
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
@aju22 hey, I hope you have figured out how to solve this issue, since I'm also dealing with it. I honestly didn't fully understand the solution given by the bot, so would you please share yours?
@Khalidzo For my particular use case, I wanted the model to act sort of like a teacher and give bullet-point answers for easy understanding.
With a few hacks and some "prompt engineering", I get fairly good answers, and I also worked out a way around the most frustrating LLM output-parsing errors; it works 99% of the time.
Please check out the final implementation in my project at: DocumentGPT
You will find the agent declared in `def get_agent(self)`.
P.S.: It does consume a good number of tokens to get it working.
Great job, @aju22! This can effectively extract information from the Observations.
Discussed in https://github.com/hwchase17/langchain/discussions/7423