karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

UX improvements #62

Closed: bhankas closed this issue 9 months ago

bhankas commented 1 year ago

  • Auto-scroll as the ChatGPT response gets longer than visible page
  • Automatically move cursor to next heading (when using Org-mode) so as to make typing next query easier
  • Possibly store/query history of previous chats? (for feature parity with stock web interface)
karthink commented 1 year ago

Auto-scroll as the ChatGPT response gets longer than visible page

Good idea, thanks. I'll look into adding this. But note that this will be limited to the point where the prompt is at the top of the window, since gptel does not (and will not) move the point by default, and the window must always contain the point in Emacs.

Automatically move cursor to next heading (when using Org-mode) so as to make typing next query easier

See #36.

;; Jump to the next heading after each response (markdown-mode example)
(add-hook 'gptel-post-response-hook #'markdown-outline-next)
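For Org buffers, a similar hook can call the stock Org navigation command instead. This is a sketch; nothing in it is gptel-specific beyond the hook name:

;; Org-mode counterpart (a sketch): jump to the next visible heading
;; after each response so the next query is easy to type.
(add-hook 'gptel-post-response-hook
          (lambda () (org-next-visible-heading 1)))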

Possibly store/query history of previous chats? (for feature parity with stock web interface)

There is no persistence right now, see #17 and #58 for previous discussion on this topic.

bhankas commented 1 year ago

How about scrolling with the text as it gets added, but setting a mark before leaving the prompt, and when the query is done, moving point back by popping the mark? The user could customize the jank on return via a variable.

The reason I ask is that a lot of questions asked of ChatGPT are exploratory in nature, particularly where Google falls hilariously short. I can't speak for others, but if the response is smaller than the visible buffer, users won't notice anyway. Otherwise the responses are long, often with explanatory paragraphs at the top and bottom taking up extra space, and they appear on screen slowly enough that we are done processing the part that scrolls away anyway. Just my 2c.

The suggestion from #36 did solve selecting the next heading, thanks!

I did end up reading #17, and it looks like a really good idea, but you know better :)

karthink commented 1 year ago

add a mark before leaving prompt, and when querying is done, move it back by popping mark?

This is what gptel does. Since any insertion into a buffer can only happen at (point), point is moved continuously as the response is inserted. At the end it is moved back to where the user left it.
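As a rough illustration of that save-and-restore pattern (a sketch only, not gptel's actual implementation; the function name is made up):

;; Illustrative sketch: insert the response at its destination while
;; leaving the user's point where they left it.
(defun my/insert-response (response-text position)
  "Insert RESPONSE-TEXT at POSITION without disturbing the user's point."
  (save-excursion              ; remember point, restore it afterwards
    (goto-char position)
    (insert response-text)))   ; point moves during insertion, then snaps back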

I could add a gptel-preserve-point variable that, when set to nil, allows point to move with the response. Note that the active buffer could then no longer be interacted with asynchronously (see the first demo video in the README), since the user would not be controlling point.

I did end up reading https://github.com/karthink/gptel/issues/17, and it looks like a really good idea, but you know better :)

I'm on board with making chats persistent. The issue is left open since I haven't found the best way yet. I don't want to impose more syntax on the conversation.

karthink commented 1 year ago

gptel conversations in Org mode buffers can be saved to disk and resumed now.

Support for saving and restoring state has been added for Org mode buffers. Saving a gptel buffer to disk will save gptel metadata as Org properties. Opening an Org file and turning on gptel-mode will cause the state (if available in the file) to be restored, and the conversation can be continued.

See also M-x gptel-set-topic, which can be used to limit a conversation context to an Org heading.

Support for Markdown mode is pending.
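Resuming a saved chat then amounts to something like the following sketch (the file path is hypothetical):

;; Open the saved Org file and enable gptel-mode so the stored
;; metadata, if present, is restored and the chat can continue.
(find-file "~/chats/gptel-notes.org")   ; hypothetical path
(gptel-mode 1)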

karthink commented 1 year ago

Auto-scroll as the ChatGPT response gets longer than visible page

This has proved trickier to implement because of the async requirements. However, there are now workarounds discussed in #92.

karthink commented 9 months ago
  • Auto-scroll as the ChatGPT response gets longer than visible page

Use gptel-pre-response-hook to move the insertion point to the top of the window (see the sketch at the end of this comment). Scrolling the window itself would violate the async expectation.

  • Automatically move cursor to next heading (when using Org-mode) so as to make typing next query easier

Use gptel-post-response-hook to move the cursor wherever you need it (as in the markdown-outline-next example above). The next heading isn't always inserted (for example, if gptel-prompt-prefix-alist is customized), so gptel cannot provide a built-in option.

  • Possibly store/query history of previous chats? (for feature parity with stock web interface)

With gptel-mode turned on (dedicated gptel buffers), each Org/Markdown file can store one chat history persistently. The chat state is saved automatically when saving the buffer to a file.
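A sketch for the first item above, assuming gptel-pre-response-hook runs in the chat buffer with the prompt at point and the buffer displayed in the selected window:

;; Before the response arrives, recenter so the prompt line is at the
;; top of the window, leaving the rest of the window for the response.
(add-hook 'gptel-pre-response-hook
          (lambda () (recenter 0)))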

karthink commented 9 months ago

Auto-scroll as the ChatGPT response gets longer than visible page

I added auto-scrolling, see the README. Note that when enabled, this moves the cursor and partially breaks the async behavior.
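The setup described in the README looks roughly like the following (hook and function names are taken from the README at the time of writing and may have changed since):

;; Scroll the window as streaming text is inserted past the bottom.
(add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)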