Open championswimmer opened 2 months ago
When code is selected, would it be better to automatically add the selected code to the chat during the session?
@championswimmer curious to understand why this over ctrl+L? I can see the scenario perhaps where it reduces the necessary number of keystrokes/mouse movements
@sestinj This might be a habit issue. Tools like GitHub Copilot and Cody typically only require selecting the code. So, could Continue automatically add the selected code to the context when code is selected?
Quite some time ago we automatically included the selected context but overwhelmingly got feedback that this made it difficult to select more than one range, and that many users frequently highlight code as they are reading it, which causes a lot of unnecessary flashing in the input box. Right now we're going to stick with using cmd+L, but if we hear enough feedback that there's a clear improvement to make, we'll definitely be willing to change!
In the meantime I do think the @ selection idea is a good one
Hello!
I missed the same feature (including the currently selected content in chat) in Continue. This way we can provide precisely the most relevant code to the LLM, which I trust more than letting the LLM pick it out from a whole provided file. In my use case, a large percentage of requests follow this pattern: I select the relevant code and ask my question without any instruction mark, and the plugin I use (mostly Alibaba Tongyi Lingma, which is based on the Qwen LLM) automatically inserts the selected code below my question. I find this very convenient; in a few cases I wrongly select irrelevant code, but that causes less trouble than the convenience it brings. So I think it's worth supporting implicit insertion of the selected code into the chat (without an explicit @selection mark), or perhaps adding an option to let the user choose.
Regarding providing multiple selected code ranges, I think this is a useful feature, though perhaps not frequently used. It can solve problems that would otherwise be hard or even impossible to instruct the LLM to get right (because of providing too little or too much information). I think multiple selection is most useful when:
Regarding how to add single or multiple selected code ranges into the chat, I agree @selection is a good way. When @selection is typed in the chat window, the plugin could record the currently selected code range inline, like this:

I defined XXX in the following code: @selection file1.py: 25~50, but it reports YYY problem in the following code: @selection file2.py: 100~120. Help me check what's wrong.

Here, file1.py: 25~50 and file2.py: 100~120 are automatically extracted from the selection range at the moment @selection is typed into the chat box. When the chat content is sent to the LLM, the plugin automatically replaces the @selection marks with the corresponding code.
It may seem a little cumbersome, but I think it's useful when needed, just like cherry-picking in git.
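The substitution step described above (record a range marker when @selection is typed, expand it to code when the message is sent) could be sketched roughly as follows. This is only an illustration, not Continue's actual implementation; the names `RecordedSelection` and `expandSelections`, and the exact marker syntax, are all assumptions:

```typescript
// Hypothetical sketch of the @selection expansion idea from the comment above.
// None of these names come from Continue's codebase.

interface RecordedSelection {
  file: string;      // e.g. "file1.py"
  startLine: number; // 1-based, inclusive
  endLine: number;
  code: string;      // the text that was highlighted when @selection was typed
}

// Replace each "@selection <file>: <start>~<end>" marker, in order,
// with the corresponding recorded code wrapped in a fenced block,
// just before the chat content is sent to the LLM.
function expandSelections(
  message: string,
  selections: RecordedSelection[]
): string {
  let i = 0;
  return message.replace(/@selection\s+\S+:\s*\d+~\d+/g, (marker) => {
    const sel = selections[i++];
    if (!sel) return marker; // more markers than recordings: leave untouched
    return (
      "\n```\n" +
      `# ${sel.file}: ${sel.startLine}~${sel.endLine}\n` +
      sel.code +
      "\n```\n"
    );
  });
}
```

The key design point is that the marker is resolved twice: the file and line range are captured immediately (so the user can change their selection afterwards), while the code itself is substituted only at send time.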
Validations

Problem

The @code context requires selecting files. I was thinking of having a @selection context that will provide the code currently selected.

Solution

No response