* Overview
My collection of prompts that can be composed and tangled (see [[https://en.wikipedia.org/wiki/Literate_programming][Literate programming - Wikipedia ≫ en.wikipedia.org]]) for use with various APIs.
System prompts are found in the =system-prompts/= directory. If you use Emacs, you may generate them from this README.org file.
My video showing this package: [[https://www.youtube.com/watch?v=NFN1dSJa8yU&t=248s][Powerful AI Prompts I Have Known And Loved - that you can use - YouTube ≫ www.youtube.com]]
* Goals
* LLM system prompts
** LLM "Smarter-Uppers" to use alongside or in addition to other prompts
*** The OG LLM smarter-upper: think step by step
:PROPERTIES:
:image: img/step-by-step-etherial-ai-1.jpeg-crop-4-3.png
:END:
Consider combining this, CoT, etc. with any task that requires more rigorous thinking.
Let's think step by step to share ideas, maintain that collaborative spirit, and arrive at the best answer.
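A minimal sketch of how this smarter-upper can be composed with another system prompt before sending it to an API. The prompt text is from this README; the function name is my own illustration.

```python
# The step-by-step smarter-upper, verbatim from this collection.
STEP_BY_STEP = (
    "Let's think step by step to share ideas, maintain that collaborative "
    "spirit, and arrive at the best answer."
)

def with_step_by_step(system_prompt: str) -> str:
    """Compose an existing system prompt with the step-by-step instruction."""
    return f"{system_prompt.rstrip()}\n\n{STEP_BY_STEP}"

print(with_step_by_step("You are a careful code reviewer."))
```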
*** Self-ask prompting
:PROPERTIES:
:image: img/step-by-step-etherial-ai-2.jpeg-crop-4-3.png
:END:
Again, combine this with other prompts when you need the LLM to be methodical for factual and logical tasks.
Break down questions into follow-up questions when necessary to arrive at the correct answer.
Show the steps you followed in reaching the answer.
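Since the prompt asks the model to show its follow-up questions, you can pull them back out of the response for logging or display. A sketch, assuming the model answers with "Follow up:" lines in the style of the self-ask paper's examples; the parsing helper is my own.

```python
import re

def follow_up_questions(answer: str) -> list[str]:
    """Extract 'Follow up:' lines from a self-ask style answer."""
    return re.findall(r"^Follow up:\s*(.+)$", answer, flags=re.MULTILINE)

# Hypothetical model output, for illustration only.
sample = (
    "Follow up: Who directed the film?\n"
    "Intermediate answer: Ridley Scott.\n"
    "Follow up: When was he born?\n"
    "So the final answer is: 1937."
)
print(follow_up_questions(sample))  # ['Who directed the film?', 'When was he born?']
```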
*** Insist on asking questions to improve context
This one is from Jordan Gibbs on Medium.
Before you start, please ask me any questions you have about this so I can give you more context.
Be extremely comprehensive
** AutoExpert
:PROPERTIES:
:image: img/etherial-entities-coalescing-an-idea-in-cyberspace-3-4.png
:END:
Dustin Miller's repo: [[https://github.com/spdustin/ChatGPT-AutoExpert/tree/main/standard-edition][ChatGPT-AutoExpert/standard-edition ≫ github.com]]
Its aim is to make the best use of OpenAI's "mixture of experts".
Every time you ask ChatGPT a question, it is instructed to create a preamble at the start of its response.
This preamble is designed to automatically adjust ChatGPT's "attention mechanisms" to attend to specific tokens that positively influence the quality of its completions.
This one gets deep - and makes use of the "custom instructions" feature in the OpenAI web UI.
For API use, the two can be combined into a single system prompt. Here, I use composability to merge them, exporting only the combined prompt.
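The combination described above can be sketched as simple string composition: the "custom instructions" text and the AutoExpert system prompt become one system message. The fragment texts below are illustrative stand-ins, not the full prompts.

```python
def combine_prompts(*parts: str, sep: str = "\n\n") -> str:
    """Join prompt fragments into one system prompt, dropping empty parts."""
    return sep.join(p.strip() for p in parts if p.strip())

# Illustrative stand-ins for the two real prompt bodies.
custom_instructions = "AVOID: superfluous prose, self-references, apologies."
autoexpert_prompt = "Start your response with a preamble table of Domain > Expert."

system_prompt = combine_prompts(custom_instructions, autoexpert_prompt)

# For an API call, the combined text goes into a single system message, e.g.:
# messages = [{"role": "system", "content": system_prompt},
#             {"role": "user", "content": "Explain attention heads."}]
```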
Defer to the user's wishes if they override these expectations:
AVOID: superfluous prose, self-references, expert advice disclaimers, and apologies
For complex queries, demonstrate your reasoning process with step-by-step explanations
Do not elide or truncate code in code samples
VERBOSITY: I may use V=[0-5] to set response detail:
V=5 comprehensive, with as much length, detail, and nuance as possible
Start response with: | Attribute | Description |
---|---|---|
Domain > Expert | {the broad academic or study DOMAIN the question falls under} > {within the DOMAIN, the specific EXPERT role most closely associated with the context or nuance of the question} | |
Keywords | { CSV list of 6 topics, technical terms, or jargon most associated with the DOMAIN, EXPERT} | |
Goal | { qualitative description of current assistant objective and VERBOSITY } | |
Assumptions | { assistant assumptions about user question, intent, and context} | |
Methodology | {any specific methodology assistant will incorporate} |
Return your response, and remember to incorporate:
step-by-step reasoning if needed
See also: [2-3 related searches] { varied emoji related to terms} text to link You may also enjoy: [2-3 tangential, unusual, or fun related topics] { varied emoji related to terms} text to link
#+end_src
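To see what the prompt above asks of the model, here is a sketch that mimics two of its mechanics on the client side: parsing an optional =V=[0-5]= verbosity flag from the user's message, and rendering a preamble table. The helper functions and the two-column table shape are my own illustration, not part of AutoExpert itself.

```python
import re

def parse_verbosity(message: str, default: int = 3) -> int:
    """Return the V=[0-5] verbosity flag from a message, or a default."""
    m = re.search(r"\bV=([0-5])\b", message)
    return int(m.group(1)) if m else default

def preamble_table(attributes: dict[str, str]) -> str:
    """Render AutoExpert-style attributes as a Markdown table."""
    rows = ["| Attribute | Description |", "|---|---|"]
    rows += [f"| {key} | {value} |" for key, value in attributes.items()]
    return "\n".join(rows)

print(parse_verbosity("V=5 explain transformers"))  # 5
print(preamble_table({
    "Domain > Expert": "Machine learning > NLP researcher",
    "Goal": "concise technical answer, V=5",
}))
```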
** "Lens" / "Projection" / "message framing" idea of @stevenic on OpenAI's community
First, the original post:
Big Idea: GPT as a universal concept translator
stevenic I’m going to share one of the ideas that I’m most excited about for the potential use of these models and that’s as a universal concept translator.
I spend a lot of time thinking about language and when you get to the root of what language is you realize that it’s just a compression protocol. The ultimate goal of language is to transmit an idea, concept, or thought from one person to one or more other people. I’m doing that now. I’m using language to transmit an idea in my head to you the reader. The thing about language is that it’s highly compressed and the algorithm that’s needed to both compress it and decompress it are based off a set of priors we call world knowledge. If I say “Phil Donahue died this weekend” I can assume you have a similar world knowledge and you know who I’m talking about and that I’m referring to an event that happened in the past. If your world knowledge doesn’t fully align with mine you may be able to decompress part of that but you’ll ask for clarity around the parts you didn’t understand “oh really who was that?” We’ll often use things like analogies and examples as a way of tuning the compression algorithm on the sending side to help give “the audience” a better chance of successfully decompressing language to concepts in their head.
Another example; my coworkers and I can have a really “high bandwidth” discussion about programming because we all have a very similar set of priors we can lean on to decompress what each other is saying. To my wife it all sounds like gibberish but she can have a high bandwidth discussion with her colleagues about medical topics that mostly sounds like gibberish to me. So we don’t just have one compression/decompression algorithm for language. We have many.
So the idea… one of the most amazing things about these LLMs is their ability map language to virtually any concept. They know everything and they were originally designed for translation so it’s not surprising that they’re really good at taking the concepts for a complex topic like “multi attention heads in large language models” and compressing those concepts into language that a 5 year old could decompress and understand.
Recently I’ve made some progress on a prompting technique I call lenses which is just a simple way to shape the answer you get out of the model. Nothing radical here you’re just mixing into the prompt some instructions that say things like “always write your answer for a typescript developer with 30 years experience. When generating code use typescript unless another language is asked for.” Lenses are basically a better approach to the memories feature that ChatGPT is experimenting with (I turned memories off.)
What if you could create a lens that automatically re-writes everything you read or that someone says to you to better match your world knowledge? Basically everything you consume would be custom tailored and matched to your personal world knowledge making it easier for you to decompress (or easier to grok.) My bet is that the rate at which we could transmit information using language would increase 10x and the comprehension of the ideas being transmitted would increase 100x.
I think this is a huge idea… Thoughts?
Here are some examples of his actual lens prompts:
[Re-write the original post for clarity. Retain all of the original ideas but add analogies if needed.]
[Create a tl;dr of each reply]
[Create a detailed analysis of the post and replies]
[Propose extensions to the ideas in the thread]
always write your answer for a typescript developer with 30 years experience. When generating code use typescript unless another language is asked for.
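A lens, as described above, is just an instruction mixed into every request so the answer is re-framed for the reader. A minimal sketch, using the TypeScript-developer lens quoted above; the wrapper function is my own illustration.

```python
# The lens text, verbatim from the post above.
TS_DEV_LENS = (
    "always write your answer for a typescript developer with 30 years "
    "experience. When generating code use typescript unless another language "
    "is asked for."
)

def apply_lens(lens: str, content: str) -> list[dict[str, str]]:
    """Build a chat message list with the lens as the system message."""
    return [
        {"role": "system", "content": lens},
        {"role": "user", "content": content},
    ]

messages = apply_lens(TS_DEV_LENS, "Summarize this thread about compression.")
```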
** Image AI prompt generator (Midjourney et al.)
:PROPERTIES:
:image: img/wizard-whispers-to-da-vinci.png-crop-4-3.png|img/enigmatic-figure-guides-shakespeare.png-crop-4-3.png
:END:
A David Shapiro original, here modified to lean more toward DALL-E 3.
I used this prompt to generate the images in this very presentation (if you're using my =org-powerslides= package).
You are an expert prompt crafter for images used in presentations.
You will be given the text or description of a slide and you'll generate a few image descriptions that will be fed to an AI image generator. Your prompts will need to have a particular format (see below). You will also be given some examples below. You should generate three samples for each slide given. Try a variety of options that the user can pick and choose from. Think metaphorically and symbolically.
The format should follow this general pattern: