karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

Feature request: If nothing is selected use the current line as prompt #19

Closed. tmalsburg closed this issue 1 year ago.

karthink commented 1 year ago

I'm assuming you mean this in non-dedicated buffers, for example in a buffer with some code?

If nothing is selected it already uses the contents of the buffer until (point) as the prompt. This includes identifying past responses as responses. The readme suggests selecting a region first to limit the text that's used as the prompt, since you may not want to send everything from the beginning of the buffer to the cursor position.
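
To make that concrete, here is a rough sketch of the effective prompt text in an ordinary buffer, as an illustration of the behavior described above rather than gptel's actual implementation:

```elisp
;; Illustration of the prompt boundaries described above, not gptel's
;; internal code: with an active region only the region is sent;
;; otherwise everything from the start of the buffer up to point.
(if (use-region-p)
    (buffer-substring-no-properties (region-beginning) (region-end))
  (buffer-substring-no-properties (point-min) (point)))
```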

However, I don't see the point of sending just the current line by default. Could you explain the advantage of this behavior?

tmalsburg commented 1 year ago

Thanks for the response. From the readme it wasn't clear to me that the behavior for no selection is already defined.

> I'm assuming you mean this in non-dedicated buffers, for example in a buffer with some code?

Yes, sorry, I should have added more detail.

> However, I don't see the point of sending just the current line by default. Could you explain the advantage of this behavior?

The part of the buffer before point is often too much context, and much of it is irrelevant. For my personal use cases the current line is often enough as a prompt. But I should say that I'm not primarily using it for code but for text. When writing text, I use `visual-line-mode`, and a buffer line can be a whole paragraph.

By the way, I noticed that sometimes existing text is deleted when GPT responses are inserted in the buffer. But it seemed a bit random and I wanted to get a better understanding of what's going on before creating an issue for that.

karthink commented 1 year ago

> From the readme it wasn't clear to me that the behavior for no selection is already defined.

Yes, this is intentional since the behavior with no selection is not finalized yet.

> The part of the buffer before point is often too much context, and much of it is irrelevant.

I agree generally, but it depends on how you're using ChatGPT here. With the current behavior you can have a continuous conversation in any buffer, not just a dedicated one created with M-x gptel. I haven't found the right tradeoff yet. I'll try experimenting with this behavior for a bit.

karthink commented 1 year ago

> By the way, I noticed that sometimes existing text is deleted when GPT responses are inserted in the buffer. But it seemed a bit random and I wanted to get a better understanding of what's going on before creating an issue for that.

If this is the case, it's a bug. The desired behavior is to always insert the response below the current line.

Does it happen in dedicated gptel buffers, or in other buffers when you select a region, or both?

tmalsburg commented 1 year ago

> Does it happen in dedicated gptel buffers, or in other buffers when you select a region, or both?

I actually haven't used the dedicated buffer yet, so I don't know whether it happens there. But it does happen in other buffers. The buffer in which it happened was an org mode buffer and the GPT response replaced a subsequent org section header. If I find time I will try to come up with a reproducible example and create another issue.

karthink commented 1 year ago

@tmalsburg Please see the wiki for options. It should be easy to get this behavior using `gptel-request`. I don't think I'll add sending the current line as a default; it doesn't fit with the rest of the design.
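
For reference, a minimal sketch of how this could look with `gptel-request`, assuming it accepts a plain prompt string and a `:callback` that receives the response text plus an info plist containing `:buffer` and `:position` (check the wiki and the docstring for the current signature):

```elisp
;; A minimal sketch, not a supported gptel command: send only the
;; current logical line as the prompt and insert the reply below it.
;; Assumes gptel-request takes a prompt string and a :callback called
;; with the response and an info plist holding :buffer and :position.
(defun my-gptel-send-line ()
  "Send the current line to the LLM and insert the response below it."
  (interactive)
  (let ((prompt (buffer-substring-no-properties
                 (line-beginning-position) (line-end-position))))
    (gptel-request prompt
      :callback
      (lambda (response info)
        (if (stringp response)
            (with-current-buffer (plist-get info :buffer)
              (save-excursion
                (goto-char (plist-get info :position))
                (end-of-line)
                (insert "\n\n" response "\n")))
          (message "gptel-request failed: %s" (plist-get info :status)))))))
```

Binding a command like this to a key would give the requested behavior without changing gptel's defaults; with `visual-line-mode` the logical line can still span a whole paragraph, which matches the use case above.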