Shopify / liquid

Liquid markup language. Safe, customer-facing template language for flexible web apps.
https://shopify.github.io/liquid/
MIT License

Is it possible to implement something like Microsoft Guidance with Liquid templates? #1723

Open MithrilMan opened 1 year ago

MithrilMan commented 1 year ago

Hello, I have one question. I've found a Python library made by Microsoft: https://github.com/microsoft/guidance. In this library they have a prompt that is interleaved with "methods" so that the prompt is built progressively.

Let's look at the example from their home page:

# connect to a chat model like GPT-4 or Vicuna
gpt4 = guidance.llms.OpenAI("gpt-4")
# vicuna = guidance.llms.transformers.Vicuna("your_path/vicuna_13B", device_map="auto")

experts = guidance('''
{{#system~}}
You are a helpful and terse assistant.
{{~/system}}

{{#user~}}
I want a response to the following question:
{{query}}
Name 3 world-class experts (past or present) who would be great at answering this?
Don't answer the question yet.
{{~/user}}

{{#assistant~}}
{{gen 'expert_names' temperature=0 max_tokens=300}}
{{~/assistant}}

{{#user~}}
Great, now please answer the question as if these experts had collaborated in writing a joint anonymous answer.
{{~/user}}

{{#assistant~}}
{{gen 'answer' temperature=0 max_tokens=500}}
{{~/assistant}}
''', llm=gpt4)

experts(query='How can I be more productive?')

They surround some text with a tag; here, the system prompt:

{{#system~}}
You are a helpful and terse assistant.
{{~/system}}

(this much seems possible; a sketch of how I picture it in Liquid follows, so let's move on)
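In Liquid the tag syntax would be {% system %}...{% endsystem %} rather than {{#system~}}...{{~/system}}, implemented as a custom block tag registered from Ruby. A minimal sketch of how I picture it, using Liquid's real Liquid::Block and register_tag APIs; the {% system %} tag itself and the :messages register are my own invention:

require 'liquid'

# Sketch: a custom {% system %} ... {% endsystem %} block tag that captures
# its rendered body as a "system" chat message instead of printing it.
class SystemTag < Liquid::Block
  def render(context)
    body = super # `super` returns the block's rendered inner text
    (context.registers[:messages] ||= []) << { role: 'system', content: body.strip }
    '' # contribute nothing to the rendered output itself
  end
end

Liquid::Template.register_tag('system', SystemTag)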

Then they have another tag that specifies the user prompt:

{{#user~}}
I want a response to the following question:
{{query}}
Name 3 world-class experts (past or present) who would be great at answering this?
Don't answer the question yet.
{{~/user}}
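The {{query}} placeholder, at least, looks like plain variable interpolation, which Liquid already does out of the box:

require 'liquid'

template = Liquid::Template.parse('I want a response to the following question: {{ query }}')
puts template.render('query' => 'How can I be more productive?')
# => I want a response to the following question: How can I be more productive?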

Then they have an assistant block, and within it a call to their gen method:

{{#assistant~}}
{{gen 'expert_names' temperature=0 max_tokens=300}}
{{~/assistant}}

From my understanding, that library takes the text generated in the prompt so far and uses it as the LLM prompt for a specific model (this is just a technical detail; what I mean is that they build the prompt up to that gen function call). Then they take the result of that gen call, place it within the assistant tag, and keep building up to the next gen call, if any.

This is a very effective way to build a prompt for an LLM, and I was trying to understand whether Liquid allows this kind of behavior out of the box, or where I should look to implement it. I've put a rough sketch of what I mean just below.
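To make the question concrete, here is a rough end-to-end sketch of how I imagine this could work with custom tags. Beyond Liquid::Block, Liquid::Tag, register_tag, parse and render, none of this is an existing Liquid API; gen, the role tags, and the llm register are names I made up, and the LLM client is a stub:

require 'liquid'

# Generalizing the system tag sketch above: one block tag class for
# {% system %}, {% user %} and {% assistant %}. Each records its rendered
# body as one chat turn in a shared list kept in the render registers.
class ChatTurnTag < Liquid::Block
  def render(context)
    body = super.strip
    messages = (context.registers[:messages] ||= [])
    messages << { role: tag_name, content: body } unless body.empty?
    '' # suppress template output; the transcript lives in the registers
  end
end

# Hypothetical {% gen "expert_names" %} tag: sends the transcript built so
# far to a caller-supplied LLM client, exposes the reply as a template
# variable, and returns it so the enclosing assistant block records it.
class GenTag < Liquid::Tag
  def initialize(tag_name, markup, options)
    super
    @var_name = markup.strip.gsub(/['"]/, '')
  end

  def render(context)
    messages = context.registers[:messages] ||= []
    reply = context.registers[:llm].call(messages) # hypothetical client
    context[@var_name] = reply # later usable as {{ expert_names }}
    reply
  end
end

%w[system user assistant].each { |role| Liquid::Template.register_tag(role, ChatTurnTag) }
Liquid::Template.register_tag('gen', GenTag)

template = Liquid::Template.parse(<<~LIQUID)
  {% system %}You are a helpful and terse assistant.{% endsystem %}
  {% user %}I want a response to the following question:
  {{ query }}
  Name 3 world-class experts (past or present) who would be great at answering this?
  Don't answer the question yet.{% enduser %}
  {% assistant %}{% gen "expert_names" %}{% endassistant %}
LIQUID

template.render(
  { 'query' => 'How can I be more productive?' },
  registers: { llm: ->(messages) { 'stub reply: plug a real LLM call in here' } }
)

The key point is that Liquid renders tags in document order, so by the time gen runs, the registers already hold every turn rendered before it, which seems to match how guidance builds the prompt progressively.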

Thanks