guidance-ai / guidance

A guidance language for controlling large language models.
MIT License
18.92k stars 1.04k forks

A declarative framework on top of guidance #1051

Open AlbanPerli opened 4 days ago

AlbanPerli commented 4 days ago

Hi there!

First, I would like to thank you for this great lib, which takes a very interesting and unique approach.

I've built this lib on top of yours: https://github.com/AlbanPerli/Noema-Declarative-AI

The main goal here is to use declarative programming to make interleaving and CFGs easy to use, in the "same" way that SwiftUI and other frameworks facilitate UI programming.

This first step is a humble proposition aiming to pursue and facilitate your approach. The following steps will explore the capabilities that the Noesis/Noema concept offers for performing auto-analysis and auto-correction.

This is not self-promotion, just trying to share with you my enthusiasm for your work! 😊

Do you have any roadmap for the next steps of guidance?

Thanks!

Alban

hudson-ai commented 3 days ago

@AlbanPerli this is super cool, thank you for sharing!!!

With guidance, the goal is largely to provide a user-friendly but relatively low-level interface for building LLM applications, as well as higher-level interfaces on top of our stack. So it's really exciting to see users such as yourself doing exactly that.

Some features in the pipeline include UI improvements for the notebook experience, better support for constraining remote and multimodal models, providing more statistics and feedback to users in order to help them build good prompts and understand exactly how constraints are affecting outputs, and providing mechanisms for improving generation throughput, e.g. with threading- or async-based concurrency.

Out of curiosity, are there any particular pain points (or nice highlights!) that you are experiencing when using guidance? Are there any particular features that would make your life easier or unlock some superpowers for Noema and its users? Any feedback would be really useful :)

Cheers!

AlbanPerli commented 22 hours ago

Thanks for your response @hudson-ai :)

This is exactly the way I see it: guidance and CFGs as a kind of « assembly language for LLMs », with higher-level libs using them to make the creation of abstract tasks easier and faster.

As a user of guidance, most of the functionalities are really cool, efficient, and VERY satisfying.

The main problem is imagining new use cases: for me, it's a new way of thinking about algorithms, by programming how to think through the idea (using CFGs or interleaving). This is why I built Noema. It reduces the capabilities of guidance, but (I hope) helps by simplifying the use cases that can be found.

From a high-level point of view, some more examples showing CFGs, like the arithmetic one in the guidance README, would be helpful. Maybe in a field less mathematical and more semantic?
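To make that concrete, here is a toy illustration in plain Python (not guidance's actual grammar API) of what a "semantic" CFG example could look like: a grammar over simple sentences rather than arithmetic expressions. The grammar, symbols, and enumerator below are all made up for illustration.

```python
import itertools

# A toy context-free grammar over a semantic domain (simple sentences)
# instead of arithmetic. Plain Python only; guidance's own grammar API
# would express this differently.
GRAMMAR = {
    "SENTENCE": [["SUBJECT", "VERB", "OBJECT"]],
    "SUBJECT": [["the model"], ["the user"]],
    "VERB": [["constrains"], ["generates"]],
    "OBJECT": [["the output"], ["a grammar"]],
}

def expand(symbol):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:  # terminal symbol
        yield symbol
        return
    for production in GRAMMAR[symbol]:
        for parts in itertools.product(*(expand(s) for s in production)):
            yield " ".join(parts)

sentences = list(expand("SENTENCE"))
print(len(sentences))   # 8 possible sentences
print(sentences[0])     # "the model constrains the output"
```

A constrained decoder would restrict the model's tokens to exactly this language, which is the kind of non-mathematical example that might be easier for newcomers to map onto their own tasks.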

The usage of remote and/or multi-modal models would definitely be a good point for the (very few!) current users of Noema.

From the lib usage point of view, I have some difficulties with JSON and TypeAdapter: the result doesn't work as expected (from my understanding). The fields or elements are sometimes empty or filled with garbage. So, for now, to get an array of something, Noema uses a custom CFG.
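For reference, one way to narrow this down is to check the pydantic side alone, with no model involved. The sketch below (types chosen arbitrarily for illustration) shows what a `TypeAdapter` should accept and the schema a constrained generator would target; if the generated JSON fails this validation, the garbage is being produced at generation time rather than by pydantic.

```python
from pydantic import TypeAdapter

# Sanity check of the pydantic side alone: a TypeAdapter for a list of
# strings should round-trip clean JSON.
adapter = TypeAdapter(list[str])

good = adapter.validate_json('["alpha", "beta"]')
print(good)  # ['alpha', 'beta']

# The JSON schema a constrained generator would target,
# e.g. {'items': {'type': 'string'}, 'type': 'array'}
schema = adapter.json_schema()
print(schema)
```

Isolating the failure this way (does the raw generated text validate? does it match the schema?) would make a minimal reproduction for a new issue much easier to write.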

The last point is related to a specific use case of Noema: currently, the lib builds a 'Noesis' based on the declared task. Basically: an instruct prompt representing the declared task flow. In some specific cases, I need to use guidance's reset function to rebuild the instruct prompt with new values generated during execution. In that case, a stateless call doesn't fit:

tmp_copy = str(llm)  # snapshot the current prompt as text
# …
# … do something with the LLM
llm.reset()  # wipe the model state
updated_prompt = update_value_in_previous_prompt(tmp_copy)  # rebuild with new values
llm += updated_prompt  # replay the whole updated prompt from scratch

This can be very token-consuming.

Could we get a reset function with a parameter that reverts to a particular POI, i.e. a named generation? From my understanding of the transformer algorithm: no. But maybe?

Thanks! :)

hudson-ai commented 18 hours ago

@AlbanPerli thanks for the feedback and ideas -- super useful!

Re: issues with JSON -- can you provide any examples of the kind of issue or unexpected behavior you've faced here? Either here or in a new issue would be awesome.

And re: this

Currently, the lib builds a 'Noesis' based on the declared task. Basically: an instruct prompt representing the declared task flow. In some specific cases, I need to use guidance's reset function to rebuild the instruct prompt with new values generated during execution. In that case, a stateless call doesn't fit.

Can you explain a bit about what you mean by "using a stateless call doesn't match"? When it comes to re-prompting, we do our best to reuse as many tokens from the underlying KV cache as we can, so you hopefully shouldn't have to pay a big expense here if you are mostly modifying the suffix of a prompt. That being said, I could see that exposing some kind of "checkpointing" mechanism could be really useful if there are a few prompt states that users want to return to often. Curious to hear your thoughts/experience here.
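Since guidance model states are (as I understand the design) immutable, with `lm + text` returning a new state, keeping a reference to an earlier state already acts as a checkpoint. The sketch below gives those references names; `ToyState` is a stand-in for a real guidance `Model` so the example runs without an LLM, and the `Checkpoints` helper is purely hypothetical.

```python
# Sketch of a named-checkpoint helper over immutable model states.
# `ToyState` stands in for a guidance Model: `state + text` returns a
# NEW state, so a saved reference is never mutated out from under us.
class ToyState:
    def __init__(self, text=""):
        self.text = text

    def __add__(self, more):
        return ToyState(self.text + more)  # immutable: returns a new state

class Checkpoints:
    def __init__(self):
        self._saved = {}

    def save(self, name, state):
        self._saved[name] = state  # just hold the immutable state
        return state

    def restore(self, name):
        return self._saved[name]

ckpt = Checkpoints()
lm = ToyState("system prompt. ")
lm = ckpt.save("noesis", lm)          # name this point of interest (POI)
lm = lm + "first exploration..."
lm = ckpt.restore("noesis")           # revert to the named POI, no reset()
lm = lm + "second exploration..."
print(lm.text)  # "system prompt. second exploration..."
```

With a real backend, returning to a saved state whose prompt is a prefix of prior work should also let the KV cache be reused, which is what would make this cheaper than the `reset()` plus full re-prompt pattern described above.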