WILLOSCAR opened 2 months ago
To address your questions on modifying the agent's default behavior prompt and updating the `query_engine_tool` settings, follow these steps:
You can customize the default prompt that the `OpenAIAgent` uses during interactions by calling the `from_tools` method with either the `system_prompt` or the `prefix_messages` parameter. Here are examples for both:

**Using `system_prompt`:**
```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")

agent = OpenAIAgent.from_tools(
    [multiply_tool, add_tool],  # tools defined elsewhere (e.g. FunctionTool instances)
    llm=llm,
    verbose=True,
    system_prompt="Your custom system prompt here.",
)
```
**Using `prefix_messages`:**
```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.base.llms.types import ChatMessage

llm = OpenAI(model="gpt-3.5-turbo-0613")

# A system message that is prepended to every conversation with the agent
prefix_messages = [ChatMessage(content="You are a helpful assistant.", role="system")]

agent = OpenAIAgent.from_tools(
    [multiply_tool, add_tool],  # tools defined elsewhere
    llm=llm,
    verbose=True,
    prefix_messages=prefix_messages,
)
```
Note: You cannot specify both `system_prompt` and `prefix_messages` simultaneously [1][2].
**`query_engine_tool` Settings**

To adjust the default `text_qa_template` used by the `query_engine_tool`, you can define custom prompts and pass them to the query engine. Here is how you can do it:
**Define Custom Prompts:**
```python
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core import ChatPromptTemplate

# Define the custom prompt strings
qa_prompt_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)
refine_prompt_str = (
    "We have the opportunity to refine the original answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question: {query_str}. "
    "If the context isn't useful, output the original answer again.\n"
    "Original Answer: {existing_answer}"
)

# Create ChatPromptTemplate objects
chat_text_qa_msgs = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        content="Always answer the question, even if the context isn't helpful.",
    ),
    ChatMessage(role=MessageRole.USER, content=qa_prompt_str),
]
text_qa_template = ChatPromptTemplate(chat_text_qa_msgs)

chat_refine_msgs = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        content="Always answer the question, even if the context isn't helpful.",
    ),
    ChatMessage(role=MessageRole.USER, content=refine_prompt_str),
]
refine_template = ChatPromptTemplate(chat_refine_msgs)
```
**Use the Custom Templates in the Query Engine:**
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI

documents = SimpleDirectoryReader("./data/paul_graham/").load_data()

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
index = VectorStoreIndex.from_documents(documents)

# Query using the custom templates
print(
    index.as_query_engine(
        text_qa_template=text_qa_template,
        refine_template=refine_template,
        llm=llm,
    ).query("Who is Joe Biden?")
)
```
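If the query engine has already been constructed, the same templates can also be swapped in afterwards. A minimal sketch, assuming the `text_qa_template` and `refine_template` objects defined above and the `get_prompts`/`update_prompts` helpers that LlamaIndex query engines expose:

```python
# Build the query engine first, then swap the templates in afterwards
query_engine = index.as_query_engine(similarity_top_k=3, llm=llm)

# Inspect the currently registered prompts to confirm the key names
print(query_engine.get_prompts().keys())

# Replace the default QA/refine prompts with the custom templates
query_engine.update_prompts(
    {
        "response_synthesizer:text_qa_template": text_qa_template,
        "response_synthesizer:refine_template": refine_template,
    }
)
```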
This setup allows you to customize the prompts used during interactions and retrievals [3][4][5].

By following these steps, you can effectively modify the agent's default behavior prompt and update the `query_engine_tool` settings to suit your needs.
To continue talking to Dosu, mention @dosu.
- An OpenAI agent has no prompts; it's all chat history. The tools get sent directly over OpenAI's API.
- You can update the query engine prompts exactly as you wrote above, before converting the query engine into a tool.
- Tool names and descriptions are also important; they can be set when creating the tool (see the sketch below).
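For illustration, here is a minimal sketch of wrapping a query engine (using the custom templates from above) into a `QueryEngineTool` with an explicit name and description, and handing it to the agent. The tool name, description, and the reuse of the `index`/`llm`/template variables are assumptions for the example, not library defaults:

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import QueryEngineTool

# Query engine built with the custom text_qa_template / refine_template from above
query_engine = index.as_query_engine(
    text_qa_template=text_qa_template,
    refine_template=refine_template,
    llm=llm,
)

# Wrap it into a tool; the name and description guide the agent's tool selection
rag_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="paul_graham_docs",  # hypothetical name
    description=(
        "Useful for answering questions about the documents loaded into the index."
    ),
)

# The agent decides when to call the tool based on its name and description
agent = OpenAIAgent.from_tools([rag_tool], llm=llm, verbose=True)
```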
Can this be set automatically when initializing with `index.as_chat_engine()`? I went through the source code layer by layer and it doesn't seem to work:
```python
chat_engine = index.as_chat_engine(
    chat_mode='best',
    verbose=True,
    similarity_top_k=10,
    # response_synthesizer=response_synthesizer,
    node_postprocessors=[
        LLMRerank(
            choice_batch_size=2,
            top_n=3,
        )
    ],
    text_qa_template=yueyu_system_prompt,
)
```
I would like to ask one more question: how can I force-trigger a specific tool? For example, I want the agent to call the RAG `QueryEngineTool` on every query.
**Question Validation**

**Question**

After using `chat_engine = index.as_chat_engine()`, I discovered that `chat_engine` is an `OpenAIAgent` object, which behaves very differently from `query_engine`. Do I need to modify the tool inside it? I'm looking for guidance on the `query_engine_tool` settings: specifically, how can I adjust the default `text_qa_template` that is used during retrieval? For example, with `query_engine` we can update prompts like this: