aaronik / GPTModels.nvim

GPTModels - a multi model, window based LLM AI plugin for neovim, with an emphasis on stability and clean code

Add option to control available models in the plugin setup #4

Closed brunobmello25 closed 3 weeks ago

brunobmello25 commented 1 month ago

First of all, I wanted to say that this plugin is great! Best AI interaction experience I've had so far with neovim. Definitely a keeper <3

One small problem I'm having is that I don't have ollama set up on my machine: I only use the OpenAI API with gpt-4o-mini. The problem is that the first time I open this plugin in a neovim session, it defaults to ollama.llama3.1:latest, which hangs until I cancel the request. It hangs because I don't have ollama set up.

Since I don't plan on using anything ollama-related for now, it would be nice to be able to enable/disable specific models in the plugin setup. Maybe by adding a setup key that receives a table of enabled models?

I'm thinking of an experience like this:

```lua
-- with lazy.nvim
{
  'Aaronik/GPTModels.nvim',
  dependencies = {
    'MunifTanjim/nui.nvim',
    'nvim-telescope/telescope.nvim',
  },
  config = function()
    require('gptmodels').setup({
      models = {
        'gpt-3.5-turbo',
        'gpt-4o-mini',
        'gpt-4o'
      },
    })
  end
}
```

Do you think this would be possible? If so, I'm happy to give it a try and open a PR!

aaronik commented 1 month ago

Oh snap, I remember being worried about that happening at one point. Fears realized 😱 I'd forgotten about it! Glad you're enjoying the experience anyways though!

Actually, I believe this ticket is kind of the mirror image of the other ticket about not showing OpenAI models (but with the added bummer of locking things up), both of which I'm already in the process of fixing.

Basically, the way it originally worked is that I just hard-coded the available models. I did this because the plugin used ctrl-j/k to scroll through models, and having more than 5 or 8 made that cumbersome. But then I added ctrl-p with a telescope window, and now that's all I use, and it works great no matter how many models are available. So now I'm going to move away from hard-coded models entirely and just fetch the available models and show those.

I believe there will still need to be a "preferred model", which I will default to an ollama model, but if you don't have ollama running, it won't show at all and it shouldn't affect you.
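Roughly what I have in mind, just as a sketch (not the actual implementation, and the `ollama.`/`openai.` name prefixes here are only illustrative): list models straight from ollama's `/api/tags` and OpenAI's `/v1/models`, so if ollama isn't reachable its models just never show up:

```lua
-- Sketch only: fetch installed ollama models (ollama's GET /api/tags)
local function fetch_ollama_models()
  local out = vim.fn.system({ "curl", "-s", "--max-time", "2", "http://localhost:11434/api/tags" })
  if vim.v.shell_error ~= 0 then
    return {} -- ollama isn't running, so no ollama models get listed at all
  end
  local ok, decoded = pcall(vim.json.decode, out)
  if not ok or type(decoded) ~= "table" or type(decoded.models) ~= "table" then
    return {}
  end
  local names = {}
  for _, model in ipairs(decoded.models) do
    table.insert(names, "ollama." .. model.name)
  end
  return names
end

-- Sketch only: fetch the models your OpenAI key can see (GET /v1/models)
local function fetch_openai_models()
  local key = vim.env.OPENAI_API_KEY
  if not key or key == "" then
    return {}
  end
  local out = vim.fn.system({
    "curl", "-s", "https://api.openai.com/v1/models",
    "-H", "Authorization: Bearer " .. key,
  })
  if vim.v.shell_error ~= 0 then
    return {}
  end
  local ok, decoded = pcall(vim.json.decode, out)
  if not ok or type(decoded) ~= "table" or type(decoded.data) ~= "table" then
    return {}
  end
  local names = {}
  for _, model in ipairs(decoded.data) do
    table.insert(names, "openai." .. model.id)
  end
  return names
end
```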

Anyways I'm working on this now - no guarantees when it'll land, but it's in the works.

Thanks so much for your kind words and for using the plugin! I'll update here when these changes land 🙂👍

brunobmello25 commented 1 month ago

> I believe there will still need to be a "preferred model", which I will default to an ollama model, but if you don't have ollama running, it won't show at all and it shouldn't affect you.

That's great to hear! Do you think it's possible to make it configurable in the plugin setup? That way I can default to gpt-4o-mini and avoid accidentally burning my API credits with gpt-4o 😅

> Anyways I'm working on this now - no guarantees when it'll land, but it's in the works.

No problem! Let me know if there's anything I can do to help 😊

aaronik commented 1 month ago

> That's great to hear! Do you think it's possible to make it configurable in the plugin setup? That way I can default to gpt-4o-mini and avoid accidentally burning my API credits with gpt-4o 😅

So far I think the answer is broadly no - I don't want to make it configurable, because currently there is no configuration, and I'm feeling kind of attached to that for now :D

But good news for you, because I plan on defaulting it to 4o-mini anyways, hah! I've noticed that OpenAI usually has one model that just feels like the right default for a plugin like this, so I plan on keeping the default hard-coded and updating it whenever OpenAI releases the new best default. That way only one person has to keep up with it, instead of everyone doing it in their own configs :)
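Something along these lines, just as a sketch (the names here are made up, not the plugin's real internals):

```lua
-- Sketch: one hard-coded default that gets bumped whenever OpenAI ships a new
-- obvious "right" model, instead of everyone setting it in their own config.
local DEFAULT_OPENAI_MODEL = "gpt-4o-mini"

-- Pick the default from the fetched model list, falling back to whatever is
-- available if the hard-coded one isn't there.
local function pick_default_model(available_models)
  for _, name in ipairs(available_models) do
    if name == "openai." .. DEFAULT_OPENAI_MODEL then
      return name
    end
  end
  return available_models[1]
end
```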

> No problem! Let me know if there's anything I can do to help 😊

Right on, thank you so much!

hdemers commented 4 weeks ago

Chiming in, as I've been following this thread and am having the same issue.

In that case, you might want to improve the error handling when the default model is not found. Currently we get a message in the main output window. At first that's okay, because as a new answer comes in, the error message gets overwritten. The problem arises when we re-open the window to e.g. re-read the answer: the error message appears again and we lose the previous answer we had.

As you can see below, the previously selected model correctly shows up in the title of the window, but I still get the ollama not found error and my previous answer is gone.

[screenshot: the window title shows the previously selected model, but the ollama error has replaced the previous answer]

aaronik commented 4 weeks ago

Thanks for the addition @hdemers - the problem you describe is covered in the upcoming fix. Hang tight, and thanks for your patience.

aaronik commented 3 weeks ago

Updates!

@hdemers - Your problem should be fixed! Now, instead of overwriting the contents of your pane with a missing-deps message every time you open it, there's a one-time notification about the missing optional dependency. lmk if that works for you 👍
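Under the hood it's roughly this pattern (a sketch, not the exact code):

```lua
-- Sketch: warn once per session instead of overwriting the output pane
local warned_missing_ollama = false

local function warn_missing_ollama_once()
  if warned_missing_ollama then return end
  warned_missing_ollama = true
  vim.notify(
    "GPTModels: ollama not found; ollama models won't be available (optional dependency)",
    vim.log.levels.WARN
  )
end
```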

@brunobmello25 - ~Yours isn't totally fixed yet, but hopefully it's alleviated. It still defaults to an ollama model, but ctrl-j/k should work fine now despite that. My next task is to fix that default.~ Got that sorted out, so now you should no longer be (terribly) troubled by not having ollama. Same ask - lmk if that works for you please!

@pillzu - These changes are a prerequisite to getting the OpenAI models out of the list as well when they're unused - it's a long road but we're walking it :P

hdemers commented 3 weeks ago

Nice. I just tested it and now more error message. Thanks for that!

aaronik commented 3 weeks ago

@hdemers ~You being sarcastic? Did that fix solve your issue?~ Oh, I think I misunderstood you, because I did add error messages, lol. But you probably meant "and no more error message", yeah?

When I thought you were talking about the new error message, I did start to realize it should be less intense, hah, and I still feel that way even if it was just a typo. So I reduced the intensity of the optional dependency notification. It's a smoother experience now, I think.

hdemers commented 3 weeks ago

Oh so sorry for the confusion. It totally was a typo: no more error messages.

brunobmello25 commented 3 weeks ago

@Aaronik working great over here as well!

aaronik commented 3 weeks ago

Fantastic! Then I'm closing this out. Thanks for opening the issue and please open more with any further problems you have with the plugin!