C-Loftus / talon-ai-tools

Query LLMs and AI tools with voice commands
MIT License

add a smart clipboard command #69

Closed: jaresty closed this 3 weeks ago

pokey commented 1 month ago

Can you give an example of how you'd use this?

jaresty commented 1 month ago

This is inspired by this: https://research.google/blog/smart-paste-for-context-aware-adjustments-to-pasted-code/?utm_source=tldrnewsletter

The basic idea is to use it as a lightweight refactoring tool. LLMs are very good at making words fit together. I think this could be used to transform signatures into types, to turn variables into arguments, and also to restructure text. You can cut a sentence from one paragraph, select another paragraph, say "model paste", and the LLM should find the right place to insert it into the paragraph and add transitions (etc).

jaresty commented 1 month ago

Here's a concrete example:

Write a comment describing what you want to change, or pseudo code a type definition (etc), then "model bring" some similar code to that spot and the LLM should make the changes you need.
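
As a sketch of what that workflow might look like (a hypothetical before/after; the names here are made up and none of this code is from the repo):

```python
# Hypothetical before/after illustrating "write a comment, then model bring similar code".

# Before: a comment sketches the intent, and "model bring" pulls in a similar
# function whose parameters we want turned into a type.

# TODO: expose these settings as a dataclass instead of loose parameters
def connect(host: str, port: int, timeout: float, use_tls: bool): ...

# After: the LLM infers the refactoring implied by the comment.
from dataclasses import dataclass

@dataclass
class ConnectionSettings:
    host: str
    port: int
    timeout: float
    use_tls: bool

def connect(settings: ConnectionSettings): ...
```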

C-Loftus commented 1 month ago

This is cool, and thanks for submitting. However, I wasn't sure how different this would be from our existing options, or whether the cursorless destination alone provides enough context to generate a quality response. I think in the Google research article they have more context, perhaps?

> Write a comment describing what you want to change, or pseudo code a type definition (etc), then "model bring" some similar code to that spot and the LLM should make the changes you need.

Is it already possible to do similar things using model please and/or model generate code on a cursorless target?

jaresty commented 1 month ago

You can do the same, but it is more verbose; this is very succinct. You can, for example, write just a single word and then use bring or paste to get the LLM to infer the transformation you intend. No worries if you don't want to accept the PR; I'm happy to just keep this for my own use instead, but I do think it has utility, and I'm interested to experiment and see how it gets used.

jaresty commented 1 month ago

Fwiw, I like to lean on LLMs for feedback as well. When you aren't exactly sure how you want to change a thing, using "model please" would be harder than "model paste", imo. You can model paste to a block and it will try to figure out the best place to insert what you're trying to do.

jaresty commented 1 month ago

With respect to the amount of context: it doesn't need much from the code base to be useful. "Model please" (and other commands) operate on limited context, drawing mostly from the general corpus available to the LLM. This does basically the same, with the minor difference that it is intended to combine two chunks of text instead of modifying just one. (There is also a more general version of this where you can specify how you want them to be combined, but I didn't go that far here.)
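
A minimal sketch of the kind of prompt such a command could build, assuming a hypothetical helper (this is not the implementation in the PR):

```python
def build_blend_prompt(source: str, destination: str) -> str:
    """Hypothetical helper: combine two chunks of text into one LLM prompt."""
    return (
        "Insert the first text into the second text at the most natural place, "
        "adjusting wording, syntax, and transitions so the result reads as a "
        "single coherent whole. Return only the combined text.\n\n"
        f"Text to insert:\n{source}\n\n"
        f"Destination text:\n{destination}"
    )
```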

C-Loftus commented 1 month ago

That's all fair. I am fine with the idea. I think it makes sense and I agree that model please or model generate code can sometimes add extra friction when chaining them together.

I do think we might need to think a bit about the API. model paste is not super intuitive, and the Python function acts like a clipboard but doesn't actually use the clipboard, so I think it could be renamed to something more agnostic.

I think this is worth merging, but I'd like to keep any Talon/Python relevant to this in the gpt-beta folder and change a few things first to make it super clear, if that is fair with you.

jaresty commented 1 month ago

I'm definitely open to suggestions on changing it, and also fine with putting it in beta for now 😀

pokey commented 1 month ago

Oh I get it now. Interesting idea. Is it working well in practice?

jaresty commented 1 month ago

I'd say it's still experimental. I think it makes sense to put it behind the beta flag: it works pretty well, but I might want to tweak the prompt a bit, and I'm still figuring out how to use it. It feels pretty fun to use (experimental, kind of like play). I do think it's worth trying out.

jaresty commented 1 month ago

I think this is ready to go now.

jaresty commented 4 weeks ago

@pokey I left an open question on the PR. I think we're just waiting on your feedback now.

jaresty commented 3 weeks ago

I extracted a new helper for dealing with lists.

pokey commented 3 weeks ago

Ok looking good. My last question is why everything is anchored (the final $ on each command). I generally only anchor things that end with a prose capture. Any reason we shouldn't be able to chain these with other commands?
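
Roughly, the difference looks like this in .talon syntax (the command and action names below are placeholders, not the exact ones in this PR):

```talon
# Anchored: the trailing "$" requires the utterance to end here, so the
# command cannot be chained with anything spoken after it.
model blend <user.cursorless_target>$: user.example_blend(cursorless_target)

# Unanchored: other commands may follow in the same utterance.
model blend <user.cursorless_target>: user.example_blend(cursorless_target)
```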

jaresty commented 3 weeks ago

No reason I can think of. I think I just copied that from somewhere; will fix.

C-Loftus commented 3 weeks ago

@jaresty I did a final refactor just to clean up a few things. You can merge once you are satisfied. (Perhaps just give this a final test run with some of your desired use cases to make sure I didn't regress anything)

Just as a note, I put any model blend commands into the beta folder and added a few comments as needed. I still think the API is a bit tricky to intuit, but we can iterate on it as we use it more.

jaresty commented 3 weeks ago

Fixed a small bug, but now I'm going to merge it! Thanks, all!