guidance-ai / guidance

A guidance language for controlling large language models.
MIT License

Interactivity Overhaul (User Interface & Model Instrumentation & Network Comms) #1054

Open nopdive opened 1 month ago

nopdive commented 1 month ago

Interactivity Overhaul

What you want, when you want.

-- some guidance developer (circa 2024)

Screenshare of updated UI in Jupyter notebook

Overview

This PR is the first of many focusing on interactivity. It introduces an updated user interface for notebooks, new instrumentation for models, and a corresponding network layer to handle bidirectional communication between the IPython kernel and the JavaScript client. To support this, model rendering has been reworked and tracing logic added to better support replays where required.

This PR also functions as a foundational step towards near-future work, including rendering across various environments (e.g. terminal support as a TUI and append-only outputs), upgraded benchmarking, and model inspection.

TL;DR

We added a lot of code to support better model metrics and visualization. We are getting ready for multimedia streaming, and we want users to be able to deeply inspect their models, without overheating the computer.

Acknowledgements

Big shoutouts to:

Running this PR

User Interface

Design principle: All visibility. No magic.

Overall, we're trying to show as much as we can about model outputs. When debugging outputs, there can be real ugliness that is often hidden away, including tokenization concerns and critical points that may dictate the rest of the output. This need for inspection increases as users begin to define their own structured decoding grammars, where unexpected over-constraining can occur during development.

The old user interface, which displayed HTML as a side effect in notebooks when models computed, has been replaced with a custom Jupyter widget (see Network Communications for more detail) that hosts an interactive sandboxed iframe. We still support a legacy mode if users prefer the previous UI.

Before

image

After

image

We're getting more information into the output at the expense of text density. There is simply more going on, and to keep things legible we've increased text size and spacing, compensating for the two visual elements (highlighting and underlines) used to convey token info at a glance. A general metrics bar is also displayed for discoverability of token reduction and other efficiency metrics relevant when prompt engineering for reduced costs.

When users want further detail on tokens, we support a tooltip that contains the top 5 alternate token candidates alongside exact values for the visual elements. Highlighting is applied to candidates, accentuating tokens that include spaces.

We use a monospace typeface so that data-format outputs can be inspected more quickly (e.g. verticality can matter for balancing braces and indentation).

As users learn a system, a UI with easier discoverability can come at the cost of productivity. We've made all visual components optional to keep our power users in the flow, and in the future we intend to let users define defaults to fully support this.

For legacy mode (modeled after the previous UI), users can execute guidance.legacy_mode(True) at the start of their notebook. image Old school cool.
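In a notebook, that's simply:

```python
import guidance

# Opt back into the previous HTML-based renderer for this session.
guidance.legacy_mode(True)
```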

The Code

Instrumentation

Instrumentation is key for model inspection, debugging, and cost-sensitive prompt engineering; it also backs the new UI. Metrics are now collected for both general compute resources (CPU/GPU/RAM) and model tokens (including token counts/reduction, latency, type, and backtracking).
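As a rough illustration of the kinds of fields involved (a hypothetical sketch, not the PR's actual data model), the collected metrics might be shaped roughly like this:

```python
from dataclasses import dataclass, field

@dataclass
class TokenMetrics:
    """Hypothetical sketch of per-token instrumentation."""
    token: str
    latency_ms: float      # time taken to emit this token
    is_generated: bool     # generated vs. forced/prompt token
    is_backtracked: bool   # removed by the parser after emission

@dataclass
class EngineMetrics:
    """Hypothetical sketch of per-run compute metrics."""
    cpu_percent: float
    gpu_percent: float
    ram_mb: float
    tokens: list[TokenMetrics] = field(default_factory=list)

    @property
    def token_reduction(self) -> float:
        # Fraction of output tokens guidance did not have to generate.
        forced = sum(1 for t in self.tokens if not t.is_generated)
        return forced / max(1, len(self.tokens))
```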

The Code

Network Communications

We have two emerging requirements that will impact future guidance development. One, the emergence of streaming multimedia around language models (audio/video). Two, user interactivity within the UI, requesting more data or computation that may not be feasible to pre-fetch or pre-calculate for a static client.

For user interactivity from the UI to Python, it's also important that we cover as many notebook environments as possible. Each cloud notebook provider has its own quirks, which complicates client development. Some providers love resizing cell outputs indefinitely; others refuse to display HTML unless it's secured away in an isolated iframe.

All in all, we need a solution that is isolated, reasonably available across providers, and able to stream messages between the server (Jupyter Python kernel) and the client (cell output with a touch of JS).

Stitch

It's 3:15AM, bi-directional comms was a mistake.

-- some guidance developer, minutes prior to passing out (circa 2024)

stitch is an auxiliary package we've created that handles bi-directional communication between a web client and a Jupyter Python kernel. It does this by creating a thin custom Jupyter widget that relays messages between the kernel and a sandboxed iframe hosting the web client. It looks something like this:

python code -> kernel-side jupyter widget -> kernel comms (ZMQ) -> client-side jupyter widget -> window event message -> sandboxed iframe -> web client (graphpaper-inline)

This package drives messages between the guidance.visual module and the graphpaper-inline client. All messages are streamed to allow near-real-time rendering within a notebook. Bi-directional comms are used to repair the display if initial messages have been missed (the client requests a full replay when it notices that the first message it receives has a non-zero identifier).
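A minimal sketch of that repair logic (names here are illustrative, not stitch's actual API):

```python
class DisplayClient:
    """Illustrative sketch: request a replay when early messages were missed."""

    def __init__(self, send_to_kernel):
        self.send_to_kernel = send_to_kernel  # callable that posts a message to the kernel
        self.last_seen_id = None

    def on_message(self, msg_id: int, payload: dict) -> None:
        if self.last_seen_id is None and msg_id != 0:
            # The first message we observe is not the first message sent:
            # ask the kernel to replay the stream from the beginning.
            self.send_to_kernel({"type": "replay_request"})
        self.last_seen_id = msg_id
        self.render(payload)

    def render(self, payload: dict) -> None:
        ...  # incremental, near-real-time rendering
```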

The Code

Future work

We wanted to shoot for the stars, and ended up in the ocean. The following will occur after this PR.

Near future tasks:

hudson-ai commented 1 month ago

So stoked about this @nopdive! Will start reviewing today. Anywhere in particular you feel needs a closer look?

hudson-ai commented 1 month ago

> So stoked about this @nopdive! Will start reviewing today. Anywhere in particular you feel needs a closer look?

I'm going to assume the modifications to the parser and engine call loop are one such place I should spend some time with :)

BTW, what's the plan for packaging this? Is stitch standalone to be broken out into its own package? Will users be able to opt out of the additional dependencies if all they want is to use guidance as a library rather than as a visual runtime?

Harsha-Nori commented 1 month ago

I'd love for rough focus areas to be:

- @hudson-ai on Loc's instrumentation and changes within the model <> parser interop layers
- @paulbkoch on general efficiency/memory leaks/etc., and on the correctness of instrumentation (i.e. are we presenting and displaying the right information, specifically around utilization metrics)
- @riedgar-ms to think about a longer-term visual testing strategy here -- don't think we need to gate the PR on this. Sam might have thoughts.
- @nking-1, time permitting, on the client-side JS pieces

Of course, everyone should feel free to play around with the whole PR, report bugs as we find them, etc. :)

nopdive commented 1 month ago

> So stoked about this @nopdive! Will start reviewing today. Anywhere in particular you feel needs a closer look?
>
> I'm going to assume the modifications to the parser and engine call loop are one such place I should spend some time with :)
>
> BTW, what's the plan for packaging this? Is stitch standalone to be broken out into its own package? Will users be able to opt out of the additional dependencies if all they want is to use guidance as a library rather than as a visual runtime?

Glad you're excited! As @Harsha-Nori mentioned, eyes on instrumentation and model changes regarding interop would be great. We've cut down the visual aspects of Model._state (I'm under the impression HTML rendering was the sole consumer, but I could be wrong), and I'm not sure whether that affects other sections of the code, including parser interop.

TBD; we need it specific to guidance, but from an infra standpoint we'll need a separate PyPI package for it anyway. Good question on additional dependencies. I lean towards a guidance-core with zero dependencies, against which developers can bring their own, while guidance declares key dependencies for general usage. It's a bigger problem than the visualization component, of course.

hudson-ai commented 1 month ago

Feature request: in the case where constraints give us only low-probability tokens to choose from, it would be really nice if we could see a few other valid tokens (if there are any). I think this would give a clearer picture of what the constraint is actually doing. Currently it'll only show four high-probability (but invalid) tokens alongside the one that was actually generated.

JC1DA commented 1 month ago

> I think this would give a clearer picture of what the constraint is actually doing.

We have both constrained & unconstrained probs for generated tokens, so it could be done. In that case, should we show fewer (invalid unconstrained) tokens and add some more (valid constrained) tokens with lower probs than the generated one? I would like to keep the number of tokens in the UI low; otherwise it gets confusing for users. Thoughts?

hudson-ai commented 1 month ago

> We have both constrained & unconstrained probs for generated tokens, so it could be done. In that case, should we show fewer (invalid unconstrained) tokens and add some more (valid constrained) tokens with lower probs than the generated one? I would like to keep the number of tokens in the UI low; otherwise it gets confusing for users. Thoughts?

Not really sure what the right UX is here. But in my opinion, showing valid tokens is more helpful than showing invalid tokens, so I would personally prioritize those. Any other opinions in the room? @Harsha-Nori ?

nopdive commented 1 month ago

> We have both constrained & unconstrained probs for generated tokens, so it could be done. In that case, should we show fewer (invalid unconstrained) tokens and add some more (valid constrained) tokens with lower probs than the generated one? I would like to keep the number of tokens in the UI low; otherwise it gets confusing for users. Thoughts?
>
> Not really sure what the right UX is here. But in my opinion, showing valid tokens is more helpful than showing invalid tokens, so I would personally prioritize those. Any other opinions in the room? @Harsha-Nori ?

Reasonable use case; I think it's worth considering. Ideally we're at 5 ± 2 entries (for the UI tooltip, not API access). We can separate the top (K−n) tokens and have n−1 neighboring/successive tokens displayed near the selected token.
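A hypothetical sketch of that mix (helper and parameter names are illustrative, not a committed design): the tooltip takes the top (K−n) unconstrained candidates plus n−1 valid constrained alternatives, alongside the chosen token.

```python
def tooltip_entries(unconstrained: dict[str, float],
                    constrained: dict[str, float],
                    chosen: str, k: int = 5, n: int = 2) -> list[str]:
    """Illustrative: mix top unconstrained candidates with valid alternatives.

    unconstrained / constrained map token -> probability under each
    distribution. Returns roughly k entries: the chosen token, the top
    (k - n) unconstrained candidates, and n - 1 valid constrained tokens.
    """
    top = sorted(unconstrained, key=unconstrained.get, reverse=True)[: k - n]
    valid = [t for t in sorted(constrained, key=constrained.get, reverse=True)
             if t != chosen and t not in top][: n - 1]
    # Dedupe while preserving order.
    return list(dict.fromkeys([chosen, *top, *valid]))
```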