Currently, these are the default model parameters if the user doesn't specify them:
But in practice, the typical parameters can be quite different for different tasks. For example, we would typically want `temperature = 0` for highly technical tasks like classification or QA. Meanwhile, we typically want penalties to be 0 for most use cases, except for tasks like summarization, where we typically want nonzero penalties.

It would greatly improve out-of-the-box accuracy if we specified different default parameters for each individual template.