Robitx / gp.nvim

Gp.nvim (GPT prompt) Neovim AI plugin: ChatGPT sessions & Instructable text/code operations & Speech to text [OpenAI]
MIT License

Assistants and better workflow #49

Open teocns opened 8 months ago

teocns commented 8 months ago

Hey there, I've been searching for a suitable plugin to adopt, but couldn't find one yet.

First of all, congrats on your product!

These are some of the features I'd like to see implemented in an ideal plugin, especially with the latest OpenAI Assistants API...

At the very least, I'd fancy a general dashboard that shows the current assistant in use, the thread (if any), and, well, the chat box.

Do you think this is something you might feel a spark of interest for? I'm also open to contributing.

Robitx commented 8 months ago

@teocns Hey, I'll have to look into the Assistants API in detail, but from what I've seen so far I certainly see the benefits: tool use, working with non-textual files (PDFs, images) and providing them as context, switching between assistants in a thread...

I also see the negatives: the assistant doesn't support streaming output yet, which kinda sucks from the user's perspective - you hit query and wait an unknown amount of time. Plus even greater vendor lock-in, and some people might not like persistently storing their data/code in OpenAI's cloud.

Concerning the points mentioned:

teocns commented 7 months ago

Thanks @Robitx, indeed many of the features you described (and that I've personally had the chance to test) complement my workflow seamlessly.

However, I do have to mention that I experience serious insert-mode typing lag, but only after the conversation has been initiated and a response from GPT has been generated. For reference, I use AstroNvim; I have tried disabling syntax highlighting and LSP. Does anything intuitively come to mind that might be causing this issue?

Note: the issue doesn't persist after restarting nvim and reopening the same chat buffer, but it will recur after the next message.

Robitx commented 7 months ago

@teocns well, after the first GPT response in the chat buffer, there is a check whether the chat title is unset (`# topic: ?`), and if so, the plugin makes a second GPT call asking it to generate the topic. Can you check whether that corresponds to the lag you're observing?
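For illustration, the check described above amounts to something like the following (the `# topic: ?` placeholder is from the thread; the function name and buffer representation here are hypothetical, not the plugin's actual Lua internals):

```python
UNSET_TOPIC = "# topic: ?"

def needs_topic(buffer_lines):
    """Return True if the chat buffer header still carries the
    placeholder topic, in which case a second GPT call would be
    made to generate a real one."""
    for line in buffer_lines:
        if line.startswith("# topic:"):
            return line.strip() == UNSET_TOPIC
    return False
```

That second call happens right after the first response arrives, which is why any lag it causes would only show up from that point onward.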

Otherwise, if you have a public nvim configuration (or if you're willing to share it with me privately), I can check the behavior and try to debug it.

teocns commented 7 months ago

I can confirm `# topic` isn't the issue.

Here's my AstroNvim user config, if you're willing to give it a chance. It contains the custom user configuration - you will need to set up AstroNvim separately and then place this configuration under the user directory.

teocns commented 7 months ago

I moved this to #53

teocns commented 7 months ago

I would like to move ahead a bit with the Assistants topic.

> I also see the negatives, the assistant doesn't support streaming output yet which kinda sucks from user perspective - you hit query and wait an unknown amount of time. Plus even greater vendor lock in and some people might not like storing persistently their data/code in OpenAI cloud.

I personally see output streaming as more of a UI-breaking feature: even ChatGPT's own GUI hasn't yet managed a comfortable cursor-side UX while the stream is running.

On another note, I was looking at pynvim to integrate with the Assistants API. Now, this is an idea from an inexperienced vim user, and I can intuitively recognise that tying this feature to Python may come with portability and other issues. Hence, what's your take: plain HTTP API, or Python SDK?

Robitx commented 7 months ago

@teocns fewer dependencies mean fewer breaking points outside of your control, so personally I would go with the plain HTTP API (within reason - it depends on how much complexity the SDK hides, but these days GPT can help with migration).
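To illustrate how little the plain-HTTP route needs, here is a sketch that builds a request to OpenAI's chat completions endpoint using only the Python standard library (the model name and API key are placeholders, and this targets chat completions rather than the Assistants endpoints, purely as an example of the no-SDK approach):

```python
import json
import urllib.request

def build_chat_request(api_key, messages, model="gpt-4", stream=False):
    """Build an HTTP request for OpenAI's chat completions endpoint
    with no dependency beyond the standard library."""
    payload = {"model": model, "messages": messages, "stream": stream}
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Usage (requires a real key and network access):
# req = build_chat_request("sk-...", [{"role": "user", "content": "hi"}])
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Swapping SDK versions or Python runtimes then stops being a concern for the plugin itself; the trade-off is hand-rolling things the SDK would otherwise handle (retries, streaming parsing, error types).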