piersolenski / wtf.nvim

Delicious diagnostic debugging in Neovim 🤤

Suggestion: refactor to stop using vim.g #6

Closed · catgoose closed this 1 year ago

catgoose commented 1 year ago

I think it is bad practice for nvim plugins to use global variables to store state/config. I would be willing to work on a PR to correct this, but I wanted to know if that would be acceptable before I spent time doing that.

Thanks, I really like this plugin btw!

piersolenski commented 1 year ago

Hey, thanks for the feedback! I totally agree, I wrote the initial version in a bit of a hurry! I'm currently working on support for visual mode, which will allow you to send not just lines but whole blocks of code for diagnosis. Amongst other things, it refactors the use of global variables into a config. Feel free to see what you think of my implementation so far at https://github.com/piersolenski/wtf.nvim/tree/feature/visual-mode - happy to add you as a reviewer if you're interested.

catgoose commented 1 year ago

Sure, sounds good to me. Yeah, the way you are getting the visual selection is how I have done it.

How do you feel about allowing users to supply a notify callback, which defaults to vim.notify?

Something like:

```lua
{
  openai_model_id = "gpt-3.5-turbo",
  ...
  notify = function(msg)
    require("notify").notify(msg, vim.log.levels.INFO, {
      title = "OpenAI",
    })
  end,
  ...
}
```

Also, if you are sending code to ChatGPT, maybe you could set a default token limit so you aren't met with errors when trying to send too many tokens at once?

https://github.com/jackMort/ChatGPT.nvim/blob/fba5423b3ddf0b67ada2bbb4d5f66d9cf76c996a/lua/chatgpt/flows/chat/tokens.lua#L15-L32
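A rough estimator along those lines could be sketched in Lua. This is a hypothetical sketch, not wtf.nvim's actual code: the 4-characters-per-token ratio is a common rule-of-thumb heuristic for English text, not OpenAI's real BPE tokenizer.

```lua
-- Hypothetical sketch: estimate token count before sending a request.
-- CHARS_PER_TOKEN is a rough heuristic, not OpenAI's actual tokenizer.
local M = {}

local CHARS_PER_TOKEN = 4

function M.estimate_tokens(text)
  return math.ceil(#text / CHARS_PER_TOKEN)
end

-- Warn (rather than hard-error) when a payload will likely exceed the limit.
function M.check_limit(text, max_tokens)
  local estimate = M.estimate_tokens(text)
  if estimate > max_tokens then
    vim.notify(
      string.format("Payload is roughly %d tokens (limit %d)", estimate, max_tokens),
      vim.log.levels.WARN
    )
    return false
  end
  return true
end

return M
```

Since the estimate can only ever be approximate, a warning plus an early return seems friendlier than rejecting the request outright.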

piersolenski commented 1 year ago

The token idea is great. I'm about to merge https://github.com/piersolenski/wtf.nvim/tree/feature/visual-mode - if I open a new issue for the tokens, would you be interested in implementing it?

As for notifications, I've replaced any usage of print with vim.notify, so nvim-notify can now pick those up by default. If anything, I was thinking about having a way to silence it, in case it becomes annoying or people rely more on the spinner for feedback. We could make notify a string to facilitate custom messages, and silence it when set to nil?

People can always use the hooks/callbacks for more advanced functionality.
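The string-or-nil idea could look something like this (a hypothetical sketch of the behaviour described above; the field names are illustrative, not wtf.nvim's actual config):

```lua
-- Hypothetical sketch: `notify` may be a custom message string,
-- or nil to silence notifications entirely.
local config = {
  notify = "Diagnosing...", -- set to nil to silence
}

local function notify(msg, level)
  if config.notify == nil then
    return -- silenced
  end
  -- Use the configured message if one was given, else the default.
  local text = type(config.notify) == "string" and config.notify or msg
  vim.notify(text, level or vim.log.levels.INFO)
end
```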

catgoose commented 1 year ago

I can probably look at it this weekend. But I think the best we can do is a token estimate: I've tried a few tokenizers, including https://platform.openai.com/tokenizer, and I get different results from each of them. There's no telling what OpenAI actually considers a token once it's processed by the LLM.

piersolenski commented 1 year ago

Great, in the meantime I'll close this issue!