utterances-bot opened 6 months ago
This is a solid and exceptionally valid piece of work. While most of us more proficient writers understand CoT for LLM prompting... it's so hard to teach this to a wider demographic outside of the bubble... Because of that, so many experience LameLLMFatigue, where the LLM just produces trash that they could have written better.
Good stuff...
--TheAIMogul (micahberkley.com)
This is an incredible piece of work, thank you for publishing it!
Have you explored using an LLM step to rate a previous step’s output with recommendations, and then having the subsequent step follow through on those recommendations?
From a brief attempt at this, I’ve seen some potential value there. I'd love to know if this is a technique you’ve heard of or used yourself.
I would love to hear your thoughts on the metaprompt available in the Anthropic docs. It does construct a chain-of-thought instruction for the given task, but I wondered whether there are any improvements, given the lack of benchmarks for this. https://docs.anthropic.com/en/docs/helper-metaprompt-experimental
Have you explored using an LLM step to rate a previous step’s output with recommendations, and then having the subsequent step follow through on those recommendations? — @joetann
Agreed on it being useful. That's the second step here where an LLM step helps with validation, filtering, and fixing.
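(A minimal sketch of this rate-then-revise chain, assuming the Anthropic Python SDK; the model name and prompts are illustrative, not from the post:)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text response."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Step 1: produce a first draft.
draft = ask("Write a one-paragraph product description for a solar phone charger.")

# Step 2: an LLM step rates the draft and makes recommendations.
critique = ask(f"Rate this draft out of 10 and list concrete recommendations to improve it:\n\n{draft}")

# Step 3: the subsequent step follows through on those recommendations.
final = ask(f"Revise the draft below to address the recommendations.\n\nDraft:\n{draft}\n\nRecommendations:\n{critique}")
```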
It does construct a chain-of-thought instruction for the given task, but I wondered whether there are any improvements, given the lack of benchmarks for this. — @kailashsp
I think metaprompting a CoT is useful. That said, for most tasks, after I've looked at dozens (sometimes hundreds) of inputs and outputs, I usually have a good sense of what the "shape" of the prompt needs to be, and prefer to roll my own.
Can the "prefill response" technique be replaced by a prompt such as "please begin with blah blah"?
I think that prompt would work decently, but it doesn't guarantee the output will always start with "blah blah"; it might fail about 0.1–1% of the time at large scale. With prefilling, though, the following works 100% of the time:
```python
messages=[
    {
        "role": "user",
        "content": input,
    },
    {
        "role": "assistant",
        "content": "blah blah",  # Prefilled response
    },
]
```
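(For context, a runnable sketch of the full call with the Anthropic Python SDK; the model name and user prompt are illustrative:)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prefill = "blah blah"
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model choice
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Tell me something interesting."},  # illustrative input
        {"role": "assistant", "content": prefill},  # prefilled response
    ],
)

# The model continues from the prefill, so prepend it to reconstruct the full output.
print(prefill + response.content[0].text)
```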
Thanks! Great piece of work. I have a question about the temperature. It is possible to set temperature higher than 1. You can even set it to 10 in theory, but that will make the output unusable, right? OpenAI decided to cap it at 2.0. What is the reason for saying temperature is a value between 0.0 and 1.0?
Here is my understanding: when an LLM generates text, it outputs a set of logits for each possible next token. These logits are raw scores that can be converted into probabilities.

Temperature scaling: the temperature T modifies the logits before they are passed through the softmax function:

scaled_logit = logit / temperature
High Temperature (T > 1): When the temperature is high, the logits are divided by a large number, resulting in smaller differences between them. This makes the probability distribution more uniform, meaning the model will sample from a wider range of tokens, leading to more random and diverse outputs.
Low Temperature (0 < T < 1): When the temperature is low, the logits are divided by a small number, amplifying the differences between them. The model will favor the tokens with higher original logits, making the output more deterministic and conservative.
Temperature of 1 (T = 1): When the temperature is set to 1, the logits remain unchanged. The output probabilities directly reflect the logits provided by the model without any scaling, offering a balance between randomness and determinism.
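(A quick numeric illustration of the scaling, in plain NumPy with made-up logits:)

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then softmax into a probability distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5]
for t in (0.5, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t).round(3))
# Lower t sharpens the distribution toward the top token;
# higher t flattens it toward uniform, which is why very high t becomes unusable.
```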
Hi, Eugene. Big fan of you and the gang. The info you've put out has been invaluable to me. Thanks so much for that!
Regarding using "Please" in prompts: I find it helps when the LLM is hesitant because it lacks confidence in the info it might provide.
I have some tasks where I need to leverage GPT's information on entities such as companies, and have had to find a balance to ensure I get reliable info.
Sometimes it will be hesitant, even though the information is accurate. Situations like these are where using "Please" has helped me.
Hi Eugene, thank you for this great post. Would you mind sharing a few pointers to the <sketchpad> idea regarding CoT? I think I got the main idea, but I'd like to learn more about the exact implementation, especially with other LLM providers. Thank you very much!
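(As a starting point, a minimal sketch of one way to implement the <sketchpad> pattern, assuming the Anthropic Python SDK; the prompt wording, model name, and tag parsing are illustrative, and the same idea ports to any provider that follows formatting instructions:)

```python
import re
import anthropic

client = anthropic.Anthropic()

prompt = """Summarize the transcript below.

<transcript>
{transcript}
</transcript>

First, think through the key points inside <sketchpad> tags.
Then, write the final summary inside <summary> tags."""

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt.format(transcript="...")}],  # "..." = your transcript
)

# Keep only the <summary> block; the <sketchpad> is scratch work we discard.
text = response.content[0].text
match = re.search(r"<summary>(.*?)</summary>", text, re.DOTALL)
summary = match.group(1).strip() if match else text
```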
Prompting Fundamentals and How to Apply them Effectively
Structured input/output, prefilling, n-shots prompting, chain-of-thought, reducing hallucinations, etc.
https://eugeneyan.com/writing/prompting/