Closed: C-Loftus closed this 3 weeks ago
Examples:

- `on last model please make this more terse`
- `on clipboard responding clipped model please make this into bullet points`

When the order is reversed I think we need `on` and `responding` with the textSource and responseMethod, since otherwise it is a bit difficult to parse mentally (i.e. `last windowed model please make this more terse` is a bit odd imo).
@jaresty with this grammar you can essentially have the model keep state and respond in any fashion you please. I.e. in the context of coding:

- `responding clipped model please create a react component for a navbar`
- `on last responding windowed model please make that more terse with dark mode styling`

In the context of writing a research paper:

- `on this responding verbal model please explain if there are any inconsistencies with my thinking`
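A minimal Python sketch of the state this kind of grammar enables. All names here (`ModelPipeline`, `get_source`, `respond`) are hypothetical stand-ins for illustration, not the actual talon-ai-tools implementation:

```python
# Hypothetical sketch: how "on <source> responding <method>" adverbs
# can give the model state that later commands consume.

class ModelPipeline:
    def __init__(self):
        self.last_response = ""   # state carried between commands
        self.clipboard = ""
        self.window = ""

    def get_source(self, source, spoken_text=""):
        """Resolve the 'on <source>' adverb to concrete text."""
        if source == "last":
            return self.last_response
        if source == "clip":
            return self.clipboard
        return spoken_text  # e.g. "on this" / the current selection

    def respond(self, method, text):
        """Resolve the 'responding <method>' adverb."""
        self.last_response = text  # always remembered, enabling chaining
        if method == "clipped":
            self.clipboard = text
        elif method == "windowed":
            self.window = text

    def run(self, prompt, source, method, spoken_text=""):
        text = self.get_source(source, spoken_text)
        result = f"{prompt}({text})"  # stand-in for the actual GPT call
        self.respond(method, result)
        return result


pipe = ModelPipeline()
# "responding clipped model please create a navbar"
pipe.run("create navbar", source="this", method="clipped")
# "on last responding windowed model please make that more terse"
pipe.run("make terse", source="last", method="windowed")
print(pipe.window)  # the second prompt applied to the first result
```

The second command reads the first command's response via `last`, which is the pipelining behavior described above.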
This looks really powerful. I'll give it a test run when I can. Excited! Thanks.
Should we change the model blend grammar to match this too?
> Should we change the model blend grammar to match this too?

I think that is out of scope for this PR, at least on my end. It could fit in the grammar, but you would have to do some if-statement checking in Python that gets a little messy/tedious to match all the cases.
I'm mostly thinking it would make sense to just change "model blend clip" into "on clip model blend" to reduce interference. It wouldn't have to handle all cases; the point is just to avoid keeping a different mental model for the one command.
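For illustration, the "if statement checking" that matching every optional-adverb combination would require might look like this hypothetical sketch (all names invented, not from the repo):

```python
# Illustrative only: the kind of case checking a unified
# "on <source> model blend" grammar would need in Python.

def resolve_blend(m):
    """m is a dict standing in for a Talon match object."""
    # Every optional adverb needs its own presence check,
    # and the combinations multiply quickly.
    if "source" in m and "destination" in m:
        return ("blend", m["source"], m["destination"])
    if "source" in m:
        return ("blend", m["source"], "paste")
    if "destination" in m:
        return ("blend", "selection", m["destination"])
    return ("blend", "selection", "paste")

print(resolve_blend({"source": "clip"}))
```

With more adverbs the branch count grows combinatorially, which is the messiness referred to above.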
There's something a bit weird about the way it is matching. I didn't expect it to match this: `on section responding at dot model summarize`, but it did. Is that expected behavior? I was trying to say: `on selection responding windowed model summarize`. I later realized that I should have been saying `on this responding windowed model summarize`. When I did that it worked really well. Nice work!
I played with it a bit and it's great. I really loved the power of being able to pipeline transformations. I ran into a couple of issues that are somewhat orthogonal to this pull request:
One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination", while this grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: `model summarize clip/last to clip/window`.
> There's something a bit weird about the way it is matching. I didn't expect it to match this: `on section responding at dot model summarize`, but it did. Is that expected behavior? I was trying to say: `on selection responding windowed model summarize`. I later realized that I should have been saying `on this responding windowed model summarize`. When I did that it worked really well. Nice work!
Should be fixed now. It seems like it matched a cursorless tag; I moved the cursorless target into the cursorless-specific file.
> One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination", while this grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: `model summarize clip/last to clip/window`.
One of my goals with pipelining is to allow easier use of `model please` for arbitrary requests (and to integrate it better alongside the static prompts in a format that is more consistent).
`model please` doesn't flow particularly well if you have arbitrary user-requested text in the middle and then have to break out of it at the very end of the phrase to say a specific keyword. That was my main justification for it being at the end.
Got it. That is a good point. I'm curious to hear thoughts from @pokey on this one. I'm also interested in adding a way to add additional textual context to every prompt, so this grammar does solve that nicely.
Fwiw, one way you could use cursorless grammar and allow for adding arbitrary text at the end could be to use a conjunction like "and". Example:
Pokey and I discussed this a lot today. We moved `model please` and `model ask` into the `modelPrompt` capture so they work like the others, along with `model take last`.
Some of these might alter some workflows, but Pokey and I agreed that we wanted to support a grammar that mimics the `action verb source target` grammar of cursorless, reduces cognitive load, and generally works for the average user.
Quick note: `model take that` would be more cursorless-like than `model take last`.
Yeah, honestly I think I'm going to change it to `model take response` anyway, since I think I want to be able to refer to the last dictated phrase from Talon.
Pokey and I did discuss this during the meetup, and I thought that was a bit ambiguous, particularly in a context where we have both the response and the last dictated phrase.
I think I like `last`, since it is also used as a target.
Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar
Aside: it would be nice if there were a way to view the clipboard contents in a browser window (`model view clipboard`?), since we sometimes put things into it without ever seeing them.
> Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar
Below is the same. Pokey recommended removing selected and just using `model take last`, since selected isn't a destination, so it doesn't fit in the grammar. Most users also don't know whether they want it selected until after it is pasted.
I think for the time being, if you want to select it in advance, it would make sense to create a simple custom command for it. In the future we can think about another PR for destination modifiers.
If you feel strongly and you really need selected to be a part of the destination grammar, then I can just keep it in, though.
I do really appreciate having it; it acts almost like a working memory. Have you ever seen a clipboard ring? It reminds me of that (example: https://marketplace.visualstudio.com/items?itemName=SirTobi.code-clip-ring).
Ideally, I think it would be an optional addendum to paste destinations like before, after, and pasting with no modifiers, since it is really more of a behavioral modification than a destination. I'm OK making do with whatever you decide, however. I had been relying on it as a way to transform the last prompt result while I can still look at it (since last and clipboard end up invisible to the user).
For the time being I think I would prefer that you use a custom command for selection in your repo, i.e. the one below:
I will discuss this with Pokey next meetup. Just want to get this merged since I am not sure about how I want to do modifiers right now. (i.e. before the destination or after the destination and what other modifiers might be reasonable, if any.)
```talon
model <user.modelPrompt> [{user.modelSource}] [{user.modelDestination}] selected$:
    text = user.gpt_get_source_text(modelSource or "")
    result = user.gpt_apply_prompt(modelPrompt, text)
    user.gpt_insert_response(result, modelDestination or "")
    user.gpt_select_last()
```
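Stripped of Talon specifics, the four `user.gpt_*` actions this command chains together could behave roughly like the following Python sketch. The state model and implementations here are assumptions for illustration, not the real talon-ai-tools code:

```python
# Plain-Python sketch of the chained actions (behavior assumed,
# not taken from the actual repo).

state = {"selection": "draft text", "clipboard": "", "last": ""}

def gpt_get_source_text(source: str) -> str:
    # An empty source falls back to the current selection.
    return state.get(source or "selection", "")

def gpt_apply_prompt(prompt: str, text: str) -> str:
    return f"[{prompt}] {text}"  # stand-in for the API call

def gpt_insert_response(result: str, destination: str) -> None:
    state[destination or "last"] = result

def gpt_select_last() -> None:
    # Hypothetical: select whatever was just inserted.
    state["selection"] = state["last"]

# "model fix selected" with no explicit source/destination:
result = gpt_apply_prompt("fix", gpt_get_source_text(""))
gpt_insert_response(result, "")
gpt_select_last()
print(state["selection"])  # "[fix] draft text"
```

The trailing `gpt_select_last()` is the only piece added by the custom command; the first three calls mirror what the stock grammar already does.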
You can also just chain it by saying `model X model take response`, which is more verbose but makes it so we don't need to introduce new parts into the grammar.
Ok. I'll use that.
We also now have `pass` as an action that doesn't do any transformation and just passes the raw source to the target. So `model pass to speech` or `model pass to browser` both work to visualize/verbalize info.
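Conceptually, `pass` is just an identity transformation in the action dispatch. A hedged Python sketch (hypothetical names, not the repo's actual dispatch):

```python
# Sketch: "pass" forwards the raw source text unchanged, while
# other actions would go through the model.

def apply_model_action(action: str, text: str) -> str:
    if action == "pass":
        return text  # no transformation, just move the text
    return f"<gpt:{action}>{text}"  # stand-in for a real prompt

# "model pass to browser": the source text is shown verbatim.
print(apply_model_action("pass", "raw clipboard contents"))
```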
Merging this. I will talk with Pokey again in the future and get his feedback, and then we can iterate if you are unhappy or think the grammar should be made more complex. I've refrained from adding some things that might be useful for my own workflow, since I am trying to make the repo a bit more generalized and fitting to a clear, predictable grammar. Hope this tradeoff is reasonable.
- Added `last` as a source
- Added `verbal` as a response option if TTS is installed
- Changed `browser` to `windowed` to fit the adverb grammar better
- Moved `user.text` to the end, allowing for easier chaining and better recognition
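One way to see why trailing free text helps recognition: when `user.text` is the final capture, every fixed keyword is anchored before the greedy free-form match. A small regex sketch, illustrative only and not how Talon actually parses phrases:

```python
import re

# With free text at the end, the fixed keywords all come first:
TRAILING = re.compile(r"^model please (?P<text>.+)$")

# If free text came before a keyword, the greedy capture could
# swallow occurrences of that keyword inside the dictation:
LEADING = re.compile(r"^model (?P<text>.+) please$")

m = TRAILING.match("model please make this more terse")
print(m.group("text"))  # "make this more terse"

m = LEADING.match("model make this please more terse please")
print(m.group("text"))  # greedy: "make this please more terse"
```

In the trailing form the dictated text can freely contain grammar keywords; in the leading form the parse becomes ambiguous whenever it does.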