Closed eliasericsson closed 2 months ago
Thanks for your kind words!
I think you, like many people, are using the plugin as a floating window. I set this as the default behaviour because it works well for the demo and is the least obstructive to people's existing workflow, in that it will sit on top of any current splits.
A much better way to use it is to open it in a split, so you can cross-reference the answer with the source code.
I might reconsider the default setting on this matter.
However, you're right, a way to recall past answers would be cool, and fairly easily achievable via a Telescope integration.
In regards to the markers, that would be a nice extra - the question is when they should initially appear and disappear. https://github.com/jackMort/ChatGPT.nvim does something similar, where it shows markers only whilst awaiting a response, but they are hidden as soon as the popup appears. Given that this plugin can also appear in splits that you might keep open for some time, I'm not sure leaving the markers in place until you close the window would be to everyone's taste. Even briefly showing them could be a nice way to indicate a loading state though, especially for those who have not added it to their status line.
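For what it's worth, a transient "awaiting response" gutter marker could be sketched with Neovim's extmark API. This is purely illustrative - none of these function names come from wtf.nvim, and `sign_text` on extmarks assumes Neovim 0.10+:

```lua
-- Illustrative sketch, not wtf.nvim's actual implementation:
-- show a gutter sign while a request is in flight, clear it on response.
local ns = vim.api.nvim_create_namespace("wtf_loading")

local function show_loading_marker(bufnr, line)
  -- Place a sign-column extmark on the queried line (0-indexed).
  return vim.api.nvim_buf_set_extmark(bufnr, ns, line, 0, {
    sign_text = "…",
    sign_hl_group = "DiagnosticInfo",
  })
end

local function clear_loading_marker(bufnr, extmark_id)
  -- Remove the marker once the popup/split opens.
  vim.api.nvim_buf_del_extmark(bufnr, ns, extmark_id)
end
```

Clearing the extmark as soon as the window opens would match the ChatGPT.nvim behaviour described above.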
Often the results mention the lines queried, I could also make this more explicit in the prompt.
Let me know what you think, I've already started on the Telescope integration 😉
Hi again, and please excuse the delay. Setting `popup_type = "vertical"` is definitely a nice way to circumvent this issue; I've changed my config now. Having a response history that can be viewed with Telescope would be fantastic!
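For anyone landing here, the setting mentioned above would go in the plugin's setup call. A minimal sketch - only `popup_type = "vertical"` is confirmed by this thread; the alternative values shown in the comment are assumptions:

```lua
-- Minimal wtf.nvim config sketch: open responses in a vertical split
-- instead of the default floating window.
require("wtf").setup({
  popup_type = "vertical", -- assumed alternatives: "popup", "horizontal"
})
```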
Added in https://github.com/piersolenski/wtf.nvim/pull/28
Check out `WtfHistory` and `WtfGrepHistory` in the updated README.
Hi, and thanks for a cool plug-in!
It would be useful to be able to recall the response from OpenAI, as the window might be closed prematurely or, perhaps more likely, because I forgot what it said. If it was possible to recall the response, that worry would be alleviated. It would also save some OpenAI credits, which would be nice!
Extra: if it was possible to place markers in the gutter where the plug-in was invoked, and from there open the given response, that might be useful too.