Closed — tenzin3 closed this issue 5 months ago
While experimenting with prompt changes, a new and better set of questions was used. The content of "The Art of Happiness at Work" by the Dalai Lama was divided into four parts, each limited to roughly 15k tokens. Each part was fed to ChatGPT along with the prompt:
The book title is "The Art of Happiness at Work" by Dalai Lama. Keeping the book title in mind, generate 20 questions from the given text.
This process generated a total of 80 questions across the four parts of the text.
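The chunking step above can be sketched roughly as follows. This is an illustrative helper, not the code actually used: it approximates tokens by word count (about 1.3 tokens per English word); an exact count would need a tokenizer such as tiktoken.

```python
def chunk_text(text, max_tokens=15000, tokens_per_word=1.3):
    """Split text into chunks under an approximate token limit.

    Word count is used as a rough proxy for tokens; swap in a real
    tokenizer for exact limits.
    """
    max_words = int(max_tokens / tokens_per_word)
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```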
The context was sent in bullet points rather than as a whole passage.
template = (
"""
Answer the user query based on the source documents.
Here are the source documents: {context_str}
You should provide your answer as a JSON blob, and also provide all relevant snippets from the contexts on which you directly based your answer, and a confidence score as a float between 0 and 1.
The snippets MUST be extracted from the context, with the exact same wording and spelling.
Your answer should be built as follows,
{{
"answer": your_answer,
"confidence_score": your_confidence_score,
"source_snippets": ["snippet_1", "snippet_2", ...]
}}
Now begin!
Here is the user question: {query_str}.
Answer:
"""
)
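The template above is filled with `str.format` (the doubled `{{ }}` braces survive as literal JSON braces), and the reply is parsed as JSON. A minimal sketch of that round trip, with hypothetical helper names:

```python
import json

def build_prompt(template, context_str, query_str):
    # str.format fills the {context_str}/{query_str} slots; doubled
    # braces in the template become literal braces in the prompt.
    return template.format(context_str=context_str, query_str=query_str)

def parse_response(raw):
    """Parse the model's JSON blob; raises on malformed output."""
    data = json.loads(raw)
    return data["answer"], data["confidence_score"], data["source_snippets"]
```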
prompt = f"""
Answer the user query based on the source documents.
Here are the source documents: {context}
Your task is divided into two parts:
1. **Get the Answer:**
- Provide a concise and precise answer to the user's question based on the given contexts.
- If you do not have enough data to provide an answer, respond with "I don't have enough data to provide an answer."
2. **Find the Source Snippets:**
- Provide all relevant snippets from the contexts on which you directly based your answer.
- Each snippet must be extracted from the context with the exact same wording and spelling.
Your response should be structured as a JSON blob:
{{
"answer": "your_answer",
"source_snippets": ["snippet_1", "snippet_2", ...]
}}
Now begin!
Here is the user question: {question}.
Answer:
"""
prompt = f"""
Answer the user query based on the source documents.
Here are the source documents: {context}
Your task is divided into two parts:
1. **Get the Answer:**
- Provide a concise and precise answer to the user's question based on the given contexts.
- If you do not have enough data to provide an answer, respond with "I don't have enough data to provide an answer."
2. **Find the Source Snippets:**
- Provide all relevant snippets from the contexts on which you directly based your answer.
- Each snippet must be extracted from the context with the exact same wording and spelling.
Your response should be structured as follows:
Answer:
your_answer
Source Snippets:
snippet_1
snippet_2
...
Now begin!
Here is the user question: {question}.
Answer:
"""
The LLM gave answers to the majority of the questions, but for some questions it did not give a proper answer.
question = "How does the Dalai Lama explain the importance of training the mind and reshaping one's attitudes toward work?"
with three retrieved contexts (context = 3)
template = (
"""
Answer the user query based on the source documents.
Here are the source documents:\n {context_str}
You should provide your answer as a JSON blob, and also provide all relevant snippets from the contexts on which you directly based your answer, along with their sources and a confidence score as a float between 0 and 1.
The snippets MUST be extracted from the context, with the exact same wording and spelling.
Your answer should be built as follows,
{{
"answer": your_answer,
"confidence_score": your_confidence_score,
"source_snippets": [
{{
"snippet": "snippet_1",
"source": "source_1"
}},
{{
"snippet": "snippet_2",
"source": "source_2"
}},
...
]
}}
Now begin!
Here is the user question: {query_str}.
Answer:
"""
)
context = ""
context_text = ""
for idx, retrieved_node in enumerate(retrieved_nodes):
    # Label each chunk so the model can cite "context number N" as a source.
    context += f"Source: context number {idx+1}, Source content: {retrieved_node.get_content()},\n\n"
    context_text += retrieved_node.get_content()
Though this template generated answers for most queries, for some questions it did not give a proper response, which later prevented the answer text from being properly extracted from the JSON string.
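One way to soften the JSON-extraction failures is to fall back to pulling the first `{...}` span out of the reply when the whole string does not parse (models often wrap the blob in prose or markdown fences). A tolerant sketch, returning `None` when nothing parses:

```python
import json
import re

def extract_json_answer(raw):
    """Recover the JSON blob from a possibly noisy model reply."""
    # First try the whole reply as-is.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Fall back to the first-to-last-brace span in the text.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None
```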
Example question where this failed: What does the Dalai Lama mean when he says he "does nothing"?
prompt = f"""
You are a chatbot designed to answer questions using content from the Dalai Lama's books.
Follow these guidelines:
- Answer the question based on the given contexts (some of which might be irrelevant).
- Be concise and precise.
- Only give me the answer and do not output any other words.
- Be careful of the language, ensuring it is respectful and appropriate.
- If you do not have a proper answer from the context, respond with "I don't have enough data to provide an answer."
- Do not give a response longer than 3000 tokens.
Contexts: {context}
Question: {question}
"""
Description
Depending on the selected LLM, we need to define a prompt template that conforms to RAG requirements.
Pointers
Reference
- LlamaIndex prompt templates
- OpenAI prompt engineering
Expected Output
Prompt Template Example
Implementation Steps
Important Note
So far, attempts to write a prompt that gets the LLM to return the proper context it used for text generation have been unsuccessful across all questions.