Closed AdamSobieski closed 1 year ago
It is already supported; follow the pattern example for constraining to numerical values: https://github.com/guidance-ai/guidance/blob/main/notebooks/pattern_guides.ipynb and the template example: https://github.com/guidance-ai/guidance/blob/main/notebooks/anachronism.ipynb
@jadermcs, thank you, I will take a closer look at those notebooks.
@jadermcs, thank you. I found that the calling-functions example clarified Guidance's capability to reuse outputs from the `gen` tag in subsequent templates:

```python
import guidance

def aggregate(best):
    return '\n'.join(['- ' + x for x in best])

# Generate three hidden completions, then render them as a bulleted list.
prompt = guidance('''The best thing about the beach is {{~gen 'best' n=3 temperature=0.7 max_tokens=7 hidden=True}}
{{aggregate best}}''')
prompt = prompt(aggregate=aggregate)
prompt
```
Hello. I would like to ask about how best to express some described functionalities with Guidance and to request them as new features if the functionalities aren't already possible.
The following pseudocode asks an LLM to provide a random animal, then asks it how many legs that kind of animal has, and then generates a sentence from these data.
An example of the desired output is:
The cat has four legs.
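Until I work out the idiomatic Guidance form, here is a minimal pure-Python sketch of the chaining behavior I am after; `run_chain` and `fake_llm` are hypothetical stand-ins of my own invention, not Guidance APIs, with a stubbed model so the example is self-contained:

```python
# Minimal stand-in for the chaining behavior under discussion (not Guidance
# itself): each generation step stores its output under a name, and later
# template steps can reference earlier names.
def run_chain(steps, gen_fn):
    variables = {}
    for name, template in steps:
        # Fill in any {var} references produced by earlier steps.
        prompt = template.format(**variables)
        variables[name] = gen_fn(prompt)
    return variables

# Stubbed "LLM" with canned answers, purely for illustration.
def fake_llm(prompt):
    if prompt.startswith("Name a random animal"):
        return "cat"
    if "how many legs" in prompt.lower():
        return "four"
    return prompt  # final step: the prompt is already the finished sentence

steps = [
    ("animal", "Name a random animal."),
    ("legs", "How many legs does a {animal} have?"),
    ("sentence", "The {animal} has {legs} legs."),
]
result = run_chain(steps, fake_llm)
print(result["sentence"])  # -> The cat has four legs.
```

In Guidance, I believe the `{{gen 'name'}}` tag plays the role of `gen_fn` here, with `{{name}}` references filling in the stored values.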
For these purposes, let us also consider a function, `articulate`, which, resembling a form of machine translation or summarization, invokes an LLM with a template and then invokes an LLM again to touch up or rephrase the resultant content, a provided gist, into grammatically correct natural-language output.

Interestingly, client-side SPARQL could be provided in templates with which to access graph-based data in external knowledgebases, in coordination with LLM-generated content.
Similarly, SPARQL-produced content could be referenced using variables and subsequently provided to LLMs in prompt templates.
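To illustrate the second direction, here is a sketch of splicing SPARQL-produced bindings into a prompt template. The query and endpoint are omitted; `bindings` only mimics the shape of a SPARQL 1.1 JSON results response and is filled with mock data rather than a live query, and the template wording is my own:

```python
# Mock result bindings shaped like the SPARQL 1.1 Query Results JSON format.
bindings = [
    {"animal": {"type": "literal", "value": "cat"},
     "legs": {"type": "literal", "value": "4"}},
]

# Each variable bound by the (omitted) query becomes a slot in the prompt.
template = "The {animal} has {legs} legs. Rephrase this as fluent English."

prompts = [
    template.format(**{var: row[var]["value"] for var in row})
    for row in bindings
]
print(prompts[0])
# -> The cat has 4 legs. Rephrase this as fluent English.
```

The resulting strings could then be handed to an LLM call such as the hypothetical `articulate` function above.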
Beyond generating declarative sentences, these techniques could be used to generate natural-language questions.
Scenarios of interest for these functionalities include computer-aided and automatic item generation [1][2]. In this regard, items could be produced more efficiently with which to evaluate AI systems and LLMs.
While one can already provide content to an LLM in natural language, e.g., textbook content, and then ask for questions (and perhaps question-design rationale) about that content, under discussion here are ways that templates and LLMs together could be utilized to generate items.
Summarizing the above pseudocode examples, I hope that the following four topics are of some interest to the Guidance team and developer community:
In conclusion, how might developers best create and utilize variables, as described above, or otherwise refer to previously template-generated content in subsequent templates, using Guidance? Are the pseudocode examples above possible with Guidance? Thank you.
References
[1] Laverghetta Jr., Antonio, and John Licato. "Generating better items for cognitive assessments using large language models." (2023).
[2] Olney, Andrew M. "Generating multiple choice questions from a textbook: LLMs match human performance on most metrics." AIED Workshops, 2023.