h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
http://h2o.ai
Apache License 2.0

classic pre-prompts #969

Closed pseudotensor closed 8 months ago

pseudotensor commented 10 months ago

https://arxiv.org/abs/2309.03409

Summarization:

pseudotensor commented 10 months ago

In general, a lot of papers describe some complex iterative scheme, but in the end the real-world improvements are unclear and the work amounts to a minimal publishable unit.

An example of this is https://arxiv.org/abs/2309.16797 from Google DeepMind (I agree with https://www.youtube.com/watch?v=tkX0EfNl4Fc).

pseudotensor commented 10 months ago

https://www.reddit.com/r/LocalLLaMA/comments/16p8c1s/best_way_to_get_perfect_answers_from_llama_2_for/

codellama:

system prompt:

You are an API based on a large language model, answering user request as valid JSON only.

And the prompt itself :

Answer the following question : 
What is climate change? 
using only the facts from these articles:

---
article 45: Joe Biden is the new President of the United States. He was just elected today.
---
article 56: How to cook chicken. To cook chicken, use the microwave. 1 minute per kilo of meat.
---
article 78: Climate change impacts Europe. The northern hemisphere is experiencing long heat waves and accompanying fires and natural disasters.
---

Answer the query using the following template :
{
  "success" : boolean, // if you could successfully answer the question using only the context provided
  "response" : string, // the response to the user's question
  "references" : array // the ids of the articles you used to answer the user question. Do not mention articles not relevant to the user query.
}

If the articles do not contain the information needed, return false in the success key, and provide the response : "I cannot answer that query."

Example 1:
query : who is Joe Biden?
response : {"success" : true, "response" : "Joe Biden is the new President of the United States.", "references" : [45]}

Example 2:
query : how do I cook pork?
response : {"success" : false, "response" : "I cannot answer that query.", "references" : []}

Now answer the question : <query>

I'm getting :

 { "success" : true, "response" : "Climate change refers to the long term changes in the Earth's climate, which can be caused by a variety of factors, including the burning of fossil fuels, deforestation, and changes in land use., "references" : [78] }
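Note the reply above is not valid JSON (the closing quote on "response" is missing). A minimal sketch of guarding against that on the client side, assuming the raw model reply arrives as a string (the helper name is hypothetical):

```python
import json

FALLBACK = {"success": False,
            "response": "I cannot answer that query.",
            "references": []}

def parse_api_response(raw: str) -> dict:
    """Parse the model's JSON reply; fall back to a failure record
    when the output is truncated or malformed."""
    try:
        out = json.loads(raw)
    except json.JSONDecodeError:
        return dict(FALLBACK)
    # Enforce the template's key types.
    if not isinstance(out.get("success"), bool):
        return dict(FALLBACK)
    out.setdefault("references", [])
    return out

good = '{"success": true, "response": "Climate change impacts Europe.", "references": [78]}'
bad = '{"success": true, "response": "Climate change refers to, "references": [78]}'
print(parse_api_response(good)["references"])  # [78]
print(parse_api_response(bad)["success"])      # False
```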
gabriead commented 10 months ago

Hi @pseudotensor, does the above template have to be wrapped in the Llama-2-specific syntax? What would that look like?

pseudotensor commented 10 months ago

Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

https://arxiv.org/abs/2310.06117 https://gooey.ai/SmartGPT/ https://www.reddit.com/r/ChatGPT/comments/14qmzlt/solve_the_laundy_problem/

Question: I left 5 clothes to dry out in the sun. It took them 5 hours to dry completely. How long would it take to dry 30 clothes?
You are a mathematician or physicist. List much more general, abstract versions of this question, then describe the situation using your imagination, ensuring not to over-constrain the problem, then explore in a list all the possible different constraints or lack of constraints (be sure to consider a human viewpoint) relevant for the circumstance, then explore in a list the many extreme possibilities for issues. Finally, let's work this out in a step-by-step way to be sure we have the right answer. Make a final best guess using common sense.

system prompt:

Question: I left 5 clothes to dry out in the sun. It took them 5 hours to dry completely. How long would it take to dry 30 clothes?
You are a mathematician or physicist. List much more general, abstract versions of this question, then describe the situation using your imagination, ensuring not to over-constrain the problem, then explore in a list all the possible different constraints or lack of constraints (be sure to consider a human viewpoint) relevant for the circumstance, then explore in a list the many extreme possibilities for issues. Let's work this out in a well-structured, step-by-step, thoughtful way to be sure we have the right answer. Make a final best guess using common sense.

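A minimal sketch of wrapping any user question in the step-back pre-prompt above (the helper name is hypothetical; the template text is quoted from this thread):

```python
# Step-back pre-prompt text from this thread, used as a suffix after the question.
STEP_BACK_SYSTEM = (
    "You are a mathematician or physicist. List much more general, abstract "
    "versions of this question, then describe the situation using your "
    "imagination, ensuring not to over-constrain the problem, then explore in "
    "a list all the possible different constraints or lack of constraints "
    "(be sure to consider a human viewpoint) relevant for the circumstance, "
    "then explore in a list the many extreme possibilities for issues. Let's "
    "work this out in a well-structured, step-by-step, thoughtful way to be "
    "sure we have the right answer. Make a final best guess using common sense."
)

def step_back_prompt(question: str) -> str:
    """Prefix the question, then append the step-back instructions."""
    return f"Question: {question}\n{STEP_BACK_SYSTEM}"

print(step_back_prompt("How long would 30 clothes take to dry?").splitlines()[0])
# Question: How long would 30 clothes take to dry?
```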

pseudotensor commented 10 months ago

Chain of Verification. https://medium.com/@james.li/a-langchain-implementation-of-chain-of-verification-cove-to-reduce-hallucination-0a8fa2929b2a https://colab.research.google.com/drive/10EEKZCuD6ZLqywIg_r3d7mLuQwHNINT3?usp=sharing

Chain_of_Verification.ipynb.zip
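A rough sketch of the Chain-of-Verification loop from the linked post, assuming a generic `llm(prompt) -> str` callable (stubbed below for illustration). The four-stage structure (draft, plan checks, answer checks independently, revise) follows CoVe; all prompt wordings here are illustrative:

```python
from typing import Callable

def chain_of_verification(question: str, llm: Callable[[str], str]) -> str:
    """Four-stage CoVe loop: draft, plan verification questions,
    answer them independently, then revise the draft."""
    draft = llm(f"Answer the question:\n{question}")
    plan = llm(f"List short questions that would verify the facts in this answer:\n{draft}")
    checks = llm(f"Answer each verification question on its own, without seeing the draft:\n{plan}")
    return llm(
        "Rewrite the draft answer so it is consistent with the verified facts.\n"
        f"Question: {question}\nDraft: {draft}\nVerified facts: {checks}"
    )

# Stub LLM for illustration only; a real implementation would call a model.
calls = []
def stub_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"reply {len(calls)}"

final = chain_of_verification("What is climate change?", stub_llm)
print(len(calls))  # 4
```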

slavag commented 10 months ago

@pseudotensor Hi, I saw the commit for this issue with changes in prompter.py. How can those prompts (step_back_prompts) be used?

Thanks

pseudotensor commented 10 months ago

https://arxiv.org/abs/2310.01405

LLMs respond very well to an additional phrase appended after the question, like "This is very important to my career."
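As a trivial illustration, such an emotional-stimulus phrase can be appended mechanically (the helper name and default are only a sketch):

```python
def add_stimulus(prompt: str,
                 suffix: str = "This is very important to my career.") -> str:
    """Append an emotional-stimulus phrase after the question text."""
    return f"{prompt.rstrip()} {suffix}"

print(add_stimulus("What is climate change?"))
# What is climate change? This is very important to my career.
```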

pseudotensor commented 10 months ago

https://github.com/spdustin/ChatGPT-AutoExpert/tree/main