sashokbg closed this issue 6 months ago.
Sorry, I have little knowledge about LMQL. Could you please give a simple example of what you would expect when using promptflow?
Hello @brynn-code, of course. LMQL is a query-like language that is deployed in front of your LLM server.
It essentially allows you to script and constrain the model's output. Here are a few examples:
```
argmax
    # review to be analyzed
    review = """We had a great stay. Hiking in the mountains was fabulous and the food is really good."""

    # use prompt statements to pass information to the model
    "Review: {review}"
    "Q: What is the underlying sentiment of this review and why?"

    # template variables like [ANALYSIS] are used to generate text
    "A:[ANALYSIS]" where not "\n" in ANALYSIS

    # use constrained variable to produce a classification
    "Based on this, the overall sentiment of the message can be considered to be[CLS]" distribution CLS in [" positive", " neutral", " negative"]
```
So in this example, `[ANALYSIS]` is where the model inserts tokens using the argmax decoder, and LMQL keeps requesting tokens until it encounters a `\n`, as enforced by the `where not "\n" in ANALYSIS` constraint.
Here is an even more advanced use case, where we mix in Python scripting (if/else logic) to have the model recursively perform sub-analyses:
```
argmax
    review = """We had a great stay. Hiking in the mountains
    was fabulous and the food is really good."""

    """Review: {review}
    Q: What is the underlying sentiment of this review and why?
    A:[ANALYSIS]""" where not "\n" in ANALYSIS

    "Based on this, the overall sentiment of the message can be considered to be[CLS]" where CLS in [" positive", " neutral", " negative"]

    if CLS == " positive":
        "What is it that they liked about their stay? [FURTHER_ANALYSIS]"
    elif CLS == " neutral":
        "What is it that could have been improved? [FURTHER_ANALYSIS]"
    elif CLS == " negative":
        "What is it that they did not like about their stay? [FURTHER_ANALYSIS]"
where
    STOPS_AT(FURTHER_ANALYSIS, ".")
```
You can check the rest of the doc here:
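LMQL also ships a Python integration, which hints at what a promptflow integration could look like. Below is a minimal sketch using LMQL's `@lmql.query` decorator; it assumes a default model is configured, and the exact return behavior may vary by LMQL version:

```python
import lmql

# Sentiment query embedded in Python via LMQL's @lmql.query decorator.
# The docstring holds the LMQL program; `review` binds to the function argument.
@lmql.query
def classify_review(review):
    '''lmql
    argmax
        "Review: {review}\n"
        "Q: What is the underlying sentiment of this review and why?\n"
        "A:[ANALYSIS]" where not "\n" in ANALYSIS
        "Based on this, the overall sentiment of the message can be considered to be[CLS]" where CLS in [" positive", " neutral", " negative"]
        return CLS
    '''

# The decorated function can be called like a regular Python function.
print(classify_review("We had a great stay. Hiking in the mountains was fabulous."))
```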
Hi @sashokbg,
Thanks for your feedback. We are actually considering providing a feature named ChatGroup. Within a chat group you can provide multiple roles (essentially flows) to carry out a multi-turn conversation.
In your case, it could work like this:
- Each flow consumes the public group chat history, so it knows the last response from the other flow inside the chat group.
- A flow can emit a stop signal (e.g. "[STOP]"), which ends the chat.
- The decoder and recursive chain-of-thought functionality you mention are provided jointly by the simulator flow and the chat group orchestrator.
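To make the orchestration concrete, here is a rough sketch of the loop a chat group orchestrator could run. All names here are illustrative only, not an actual promptflow API:

```python
# Illustrative sketch of a chat group orchestration loop.
# `simulator_flow` and `assistant_flow` stand in for two promptflow flows
# (modeled here as plain callables); none of this is the real ChatGroup API.
STOP_SIGNAL = "[STOP]"

def run_chat_group(simulator_flow, assistant_flow, max_turns=10):
    history = []  # public chat history shared by all roles in the group
    flows = [simulator_flow, assistant_flow]
    for turn in range(max_turns):
        flow = flows[turn % len(flows)]
        # each flow sees the full public history, including the other
        # flow's last response
        response = flow(chat_history=history)
        if STOP_SIGNAL in response:
            break  # a flow emitted the stop signal: end the chat
        history.append({"role": flow.__name__, "content": response})
    return history
```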
Compared with LMQL, a chat group requires the original flow to be aware of the public history, and the inputs and logic need to be designed specifically for it. On the other hand, it also provides more flexibility in terms of orchestration, since you can implement any evaluation logic inside the simulator flow without being limited to the LMQL grammar.
Please let us know whether this feature meets your needs and what you think about it, thanks.
Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!
**Is your feature request related to a problem? Please describe.**
In some cases I want to use advanced prompt engineering, such as forcing a given output from the LLM, recursive chain of thought, chat memory, etc. This can easily be done using LMQL scripts.

**Describe the solution you'd like**
Integrate *.lmql as a source type for LLM chat tools.
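For illustration, such an integration could look roughly like the following custom tool. This is only a sketch: `@tool` is promptflow's tool decorator and `lmql.run` is LMQL's query runner, but `run_lmql_file`, its parameters, and the result handling are hypothetical and may differ by LMQL version:

```python
import asyncio

import lmql
from promptflow import tool  # promptflow's custom tool decorator


# Hypothetical custom tool: execute an LMQL query stored in a *.lmql file.
# The tool name and its parameters are illustrative, not an existing API.
@tool
def run_lmql_file(query_path: str, review: str) -> str:
    with open(query_path) as f:
        source = f.read()
    # assumed: lmql.run compiles and executes an LMQL string, with keyword
    # arguments bound to query inputs (check your LMQL version)
    result = asyncio.run(lmql.run(source, review=review))
    # an LMQL result exposes the assigned template variables
    return result.variables.get("CLS", "")
```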