Currently, the content.openai_text content provider implicitly prepends the result of the query JQ query to the prompt string. This is confusing and limits the flexibility of the prompt instructions.
Design
Instead of prepending the data to the prompt, users must explicitly add .query_result (if they so wish) to the prompt template string.
To simplify serialisation, the toJson or toPrettyJson (docs) template functions from the Sprig library can be used.
For example:
content openai_text {
  query  = ".data.inline.facts"
  prompt = <<-EOT
    {{ toJson .query_result }}
    Describe the facts provided in the json object.
  EOT
}
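To illustrate the intended rendering behaviour, here is a minimal Go sketch of how the provider could evaluate such a prompt template. It approximates Sprig's toJson with a standard-library helper (the real provider would wire in Sprig's FuncMap); the RenderPrompt function name and the sample query result are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// toJson approximates Sprig's toJson template function using only the
// standard library. This is a sketch, not the provider's actual wiring.
func toJson(v any) string {
	b, _ := json.Marshal(v)
	return string(b)
}

// RenderPrompt renders a prompt template against data that exposes the
// JQ query result under the key "query_result" — the explicit access
// proposed in this design, with no implicit prepending.
func RenderPrompt(promptTmpl string, queryResult any) (string, error) {
	tmpl, err := template.New("prompt").
		Funcs(template.FuncMap{"toJson": toJson}).
		Parse(promptTmpl)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	data := map[string]any{"query_result": queryResult}
	if err := tmpl.Execute(&sb, data); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	// Hypothetical result of evaluating the JQ query ".data.inline.facts".
	facts := []map[string]any{{"fact": "water boils at 100C"}}
	out, err := RenderPrompt(
		"{{ toJson .query_result }}\nDescribe the facts provided in the json object.",
		facts,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Note that a template that omits .query_result simply never sees the data, which is exactly the flexibility the design aims for.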