Yes, my thoughts align with yours.
I solved this issue: I used a `conversational-react-description` agent rather than `chat-conversational-react-description` and set `conversational_agent.agent.llm_chain.prompt` to a custom prompt. Handles any language like a charm. Thanks to the devs of LangChain!
@KeshavSingh29 Could you elaborate on the fix for this issue? How did you update the prompt?
@sindre-bouvet
These two lines in @KeshavSingh29's code updated the prompt for me:

```python
prompt = conversational_agent.agent.create_prompt(system_message=sys_msg, tools=tools)
conversational_agent.agent.llm_chain.prompt = prompt
```

If you have any input variables in your prompt, you can specify them in `create_prompt(..., input_variables=["chat_history"])`, for example.
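In case it helps anyone later, here is a minimal sketch of the whole flow, assuming `llm`, `tools`, and a chat-style `memory` are already set up (the system message text is just an example):

```python
from langchain.agents import AgentType, initialize_agent

sys_msg = "You are a helpful assistant. Always reply in the user's language."

conversational_agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    llm=llm,
    tools=tools,
    memory=memory,
    verbose=True,
)

# Rebuild the prompt with the custom system message and swap it into the chain.
prompt = conversational_agent.agent.create_prompt(
    system_message=sys_msg,
    tools=tools,
    # input_variables=["input", "chat_history", "agent_scratchpad"],  # if your prompt needs them
)
conversational_agent.agent.llm_chain.prompt = prompt
```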
@sindre-bouvet It's exactly as @happinessbaby mentioned. If you are still unable to fix it, let me know.
Hi @KeshavSingh29, it gives the error `TypeError: ConversationalAgent.create_prompt() got an unexpected keyword argument 'system_message'`.
@siddhijain47 Can you share your code snippet here? It's difficult to understand where the bug is originating from otherwise.
Sure @KeshavSingh29, here is my code:

```python
app.custom_agent = initialize_agent(
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    llm=app.llm,
    tools=app.tool,
    verbose=True,
    memory=app.memory,
    handle_parsing_errors=True,
)
print("print-----", app.custom_agent.agent.llm_chain.prompt.template)
app.prompt = app.custom_agent.agent.create_prompt(
    system_message=app.system_message.content,
    tools=app.tool,
)
app.custom_agent.agent.llm_chain.prompt = app.prompt
```
@siddhijain47
You are using a `CONVERSATIONAL_REACT_DESCRIPTION` type agent. This agent does not have a `system_message` argument in its `create_prompt` function signature. For this agent you only need to pass the `prefix` and `tools` arguments to `create_prompt`, where `prefix` is analogous to `system_message`. As an example:

```python
prompt = app.conversational_agent.agent.create_prompt(
    prefix=prefix,
    tools=app.tools,
)
app.conversational_agent.agent.llm_chain.prompt = prompt
```
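Putting it together with your snippet above, a minimal sketch of the corrected version (assuming `app.llm`, `app.tool`, `app.memory`, and `app.system_message` are set up as you had them):

```python
from langchain.agents import AgentType, initialize_agent

app.custom_agent = initialize_agent(
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    llm=app.llm,
    tools=app.tool,
    verbose=True,
    memory=app.memory,
    handle_parsing_errors=True,
)

# Pass the system message as `prefix` instead of `system_message`:
app.prompt = app.custom_agent.agent.create_prompt(
    prefix=app.system_message.content,
    tools=app.tool,
)
app.custom_agent.agent.llm_chain.prompt = app.prompt
```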
Hi @KeshavSingh29, can you help me out with passing an additional variable/flag when calling a RetrievalQA chain agent tool? I don't want my action_input text to always be automatically translated into English.
Please answer this one!
@sebduerr @siddhijain47
I guess your main objective is to not have the `action_input` text converted into a different language automatically. There are two ways I achieved this, and they work almost 100% of the time.

(1) Adjust the tool's `description` class variable. For example:

```python
description = f"""
Tool to answer about something specific.
Input should be JSON in the following format: {request_format}"""
```

I fiddled around with different `request_format` values and found a JSON one to work best, e.g. `request_format = '{{"QUESTION": "<input_question>"}}'`. This helps keep the `action_input` always in the same language. One issue with this: since LangChain is basically asking OpenAI to write these dynamic prompts, the action input can sometimes be summarized automatically if your tool input text is too long. To get around that, you will need fancier machinery, like overriding the agent's methods.
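A minimal sketch of way (1), assuming an existing `RetrievalQA` chain called `qa_chain` (the tool name is made up):

```python
from langchain.agents import Tool

request_format = '{{"QUESTION": "<input_question>"}}'
description = f"""
Tool to answer about something specific.
Input should be JSON in the following format: {request_format}"""

tools = [
    Tool(
        name="knowledge_base",  # hypothetical name
        func=qa_chain.run,
        description=description,
    )
]
```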
(2) The `prefix` should mention something like "Use only the Japanese language to reply." It works well. Below is a working case; I can use Japanese text as-is for the action input.
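The screenshot of the working case isn't reproduced here, but way (2) boils down to something like the following, reusing the `conversational_agent` and `tools` from earlier in the thread (the prefix wording is an example, not the exact text I used):

```python
prefix = """Assistant is a helpful AI.
IMPORTANT: Reply only in Japanese, and pass the user's question to tools
exactly as written, without translating or rephrasing it."""

prompt = conversational_agent.agent.create_prompt(prefix=prefix, tools=tools)
conversational_agent.agent.llm_chain.prompt = prompt
```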
Feature request
TLDR: Working on using a `chat-conversational-react-description` agent with `RetrievalQA` as a tool to answer queries using a vector DB.

Issue: If a question is asked in Japanese (the vector DB is in Japanese as well), the agent's initial `action_input` is completely nonsensical (the agent automatically translates it to English), which results in a wrong final answer.

Request: (1) It would be helpful to be able to manipulate the `action_input` so that agents do not rephrase the input queries when using a vector DB, or to have prompt support for agents in different languages. (2) It would also be helpful to have some way to see what knowledge the agent is using. Currently, I have to rely on passing the user query to the `RetrievalQA_chain` with `return_source_documents=True` to check.

Code for reference:
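(The original snippet isn't reproduced here; below is a minimal sketch of the setup described in the TLDR, assuming `llm` and a Japanese-language vector store `vectordb` already exist. Names are illustrative.)

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferWindowMemory

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectordb.as_retriever(),
    return_source_documents=False,  # set True (and call the chain directly) to inspect sources
)

tools = [
    Tool(
        name="jp_knowledge_base",  # hypothetical name
        func=qa_chain.run,
        description="Answers questions from the Japanese knowledge base.",
    )
]

agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    llm=llm,
    tools=tools,
    memory=ConversationBufferWindowMemory(
        memory_key="chat_history", k=3, return_messages=True
    ),
    verbose=True,
)
```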
Output: The top JSON output is from calling the retriever directly on the user query; the latter part is the output after initializing the agent.
Motivation
It's inconvenient to not be able to manipulate what the agent's initial `action_input` is. Plus, other languages could greatly benefit from such support.
Your contribution
I would like to hear from other people first and then make a PR.