C-Loftus / talon-ai-tools

Query LLMs and AI tools with voice commands
MIT License

Clean up GPT Grammar #80

Closed C-Loftus closed 3 weeks ago

C-Loftus commented 3 weeks ago

Examples: "on last model please make this more terse", "on clipboard responding clipped model please make this into bullet points"

When the order is reversed, I think we need "on" and "responding" before the textSource and responseMethod, since otherwise it is a bit difficult to parse mentally (i.e. "last windowed model please make this more terse" is a bit odd, imo).
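
As a rough sketch only, the shape being discussed might look something like the following as a Talon rule. The capture names `modelTextSource` and `modelResponseMethod` are placeholders for illustration, not the repo's actual identifiers; the `gpt_*` actions are the ones that appear later in this thread:

```talon
# Hypothetical sketch of the "on <source> responding <destination>" shape.
# Capture names here are assumptions, not the repo's real ones.
on {user.modelTextSource} [responding {user.modelResponseMethod}] model <user.modelPrompt>:
    text = user.gpt_get_source_text(modelTextSource)
    result = user.gpt_apply_prompt(modelPrompt, text)
    user.gpt_insert_response(result, modelResponseMethod or "")
```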

C-Loftus commented 3 weeks ago

@jaresty with this grammar you can essentially give the model state and have it respond in any fashion you please

i.e. in the context of coding

"in the context of writing a research paper on this responding verbal model please explain if there are any inconsistencies with my thinking"

jaresty commented 3 weeks ago

This looks really powerful. I'll give it a test run when I can. Excited! Thanks.

jaresty commented 3 weeks ago

Should we change the model blend grammar to match this too?

C-Loftus commented 3 weeks ago

> Should we change the model blend grammar to match this too?

I think that is out of scope for this PR, at least on my end. It could fit in the grammar, but you would have to do some if-statement checking in Python that gets a little messy and tedious to match all the cases.

jaresty commented 3 weeks ago

I'm mostly thinking it would make sense to just change "model blend clip" into "on clip model blend" to reduce interference. It wouldn't have to handle all cases; the point is just to avoid having to keep a different mental model for the one command.

jaresty commented 3 weeks ago

There's something a bit weird about the way it is matching. I didn't expect it to match "on section responding at dot model summarize", but it did. Is that expected behavior? I was trying to say "on selection responding windowed model summarize". I later realized that I should have been saying "on this responding windowed model summarize". When I did that it worked really well. Nice work!

jaresty commented 3 weeks ago

I played with it a bit and it's great. I really loved the power of being able to pipeline transformations. I ran into a couple of issues that are somewhat orthogonal to this pull request:

  1. The browser window doesn't format the model responses very well. I think we should display the response inside a preformatted HTML tag so that it handles newlines in a way that is readable.
  2. I find myself wanting to respond both below and selected. Maybe the selected argument should be appended to above or below instead of being an alternative.
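
On point 1, a minimal sketch of the preformatted-tag idea, assuming the browser view is built from an HTML string (`render_response` is a hypothetical helper for illustration, not the repo's actual code):

```python
import html

def render_response(response: str) -> str:
    """Wrap a model response in <pre> so newlines and spacing
    render readably in the browser window."""
    # Escape &, <, > so model output cannot break the page markup.
    return f"<pre>{html.escape(response)}</pre>"
```

For example, `render_response("line one\nline two")` keeps the line break intact when the HTML is rendered, whereas bare HTML would collapse it.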
jaresty commented 3 weeks ago

One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination". This grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: "model summarize clip/last to clip/window"

C-Loftus commented 3 weeks ago

> There's something a bit weird about the way it is matching. I didn't expect it to match "on section responding at dot model summarize", but it did. Is that expected behavior? I was trying to say "on selection responding windowed model summarize". I later realized that I should have been saying "on this responding windowed model summarize". When I did that it worked really well. Nice work!

Should be fixed now. It seems it matched a cursorless tag; I moved the cursorless target into the cursorless-specific file.

C-Loftus commented 3 weeks ago

> One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination". This grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: "model summarize clip/last to clip/window"

One of my goals with pipelining is to allow easier use of "model please" for arbitrary requests (and to integrate it better alongside the static prompts in a more consistent format).

"Model please" doesn't flow particularly well if you have arbitrary user-requested text in the middle and then have to break out of it at the very end of the phrase to say a specific keyword. That was my main justification for putting it at the end.

jaresty commented 3 weeks ago

Got it. That is a good point. I'm curious to hear thoughts from @pokey on this one. I'm also interested in adding a way to add additional textual context to every prompt, so this grammar does solve that nicely.

jaresty commented 3 weeks ago

Fwiw, one way you could use the cursorless grammar and still allow adding arbitrary text at the end would be to use a conjunction like "and". Example:

C-Loftus commented 3 weeks ago

Pokey and I discussed this a lot today.

Some of these might alter some workflows, but Pokey and I agreed that we want to support a grammar that mimics the "action verb source target" grammar of cursorless, reduces cognitive load, and generally works for the average user.

jaresty commented 3 weeks ago

Quick note: "model take that" would be more cursorless-like than "model take last"

C-Loftus commented 3 weeks ago

Yeah, honestly I think I'm going to change it to "model take response" anyway, since I think I want "last" to refer to the last dictated phrase from Talon.

Pokey and I did discuss this during the meetup, and I thought it was a bit ambiguous, particularly in the context where we have both the response and the last dictated phrase.

jaresty commented 3 weeks ago

I think I like "last", since it is also used as a target.

jaresty commented 3 weeks ago

Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar

jaresty commented 3 weeks ago

Aside: it would be nice if there were a way to view the clipboard contents in a browser window ("model view clipboard"?), since we sometimes put things into it without ever seeing them.

C-Loftus commented 3 weeks ago

> Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar.

Below is the same. Pokey recommended removing selected and just using "model take last", since selected isn't a destination and so doesn't fit the grammar. Most users also don't know whether they want the response selected until after it is pasted.

I think for the time being, if you want to select it in advance, it would make sense to create a simple custom command for it. In the future we can think about another PR for destination modifiers.

C-Loftus commented 3 weeks ago

If you feel strongly and really need selected to be part of the destination grammar, though, I can keep it in.

jaresty commented 3 weeks ago

I do really appreciate having it; it acts almost like a working memory. Have you ever seen a clipboard ring? It reminds me of that (example: https://marketplace.visualstudio.com/items?itemName=SirTobi.code-clip-ring).

jaresty commented 3 weeks ago

Ideally, I think it would be an optional addendum to paste destinations like before, after, and pasting with no modifiers, since it is really more of a behavioral modification than a destination. I'm OK making do with whatever you decide, however. I had been relying on it as a way to transform the last prompt result while still being able to look at it (since last and clipboard end up invisible to the user).

C-Loftus commented 3 weeks ago

For the time being I think I would prefer that you use a custom command for selection in your repo, i.e. the one below.
I will discuss this with Pokey at the next meetup. I just want to get this merged, since I am not sure yet how I want to do modifiers (i.e. before or after the destination, and what other modifiers might be reasonable, if any).

# Run a prompt against the source text, insert the response, then select it
model <user.modelPrompt> [{user.modelSource}] [{user.modelDestination}] selected$:
    text = user.gpt_get_source_text(modelSource or "")
    result = user.gpt_apply_prompt(modelPrompt, text)
    user.gpt_insert_response(result, modelDestination or "")
    user.gpt_select_last()

You can also just chain it by saying "model X model take response", which is more verbose but means we don't need to introduce new parts into the grammar.

jaresty commented 3 weeks ago

Ok. I'll use that.

C-Loftus commented 3 weeks ago

We also now have pass as an action that doesn't do any transformation and just passes the raw source to the target. So "model pass to speech" or "model pass to browser" both work to verbalize or visualize info.

C-Loftus commented 3 weeks ago

Merging this. I will talk with Pokey again in the future and get his feedback, and then we can iterate if you are unhappy or think the grammar should be made more complex. I have refrained from adding some things that might be useful for my own workflow, since I am trying to make the repo a bit more generalized and fitting to a clear, predictable grammar. Hope this tradeoff is reasonable.