Bryley / neoai.nvim

Neovim plugin for interacting with GPT models from OpenAI
MIT License

Idea: Add option to use a local model like GPT4ALL #44

Open dhazel opened 1 year ago

dhazel commented 1 year ago

Thank you for the great plugin!

An option to use a local model like GPT4ALL instead of GPT-4 could make the prompts more cost-effective to experiment with.

See codeexplain.nvim for an example of a plugin that does this.

gerazov commented 1 year ago

This would be a great addition for the plugin :+1:

It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.
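
For illustration, "only communicates with it" could look much like the plugin's existing OpenAI call, just pointed at a local address. This is a minimal sketch, assuming an OpenAI-compatible server (e.g. llama.cpp's) has already been started separately and is listening on localhost:8080; the address, route, and payload here are assumptions for illustration, not current plugin behavior:

-- Minimal sketch: query a locally hosted, OpenAI-compatible server
-- (assumed to be started separately) and print the reply.
local body = vim.json.encode({
    model = "local-model", -- local servers typically ignore this field
    messages = { { role = "user", content = "Explain this function" } },
})
local output = vim.fn.system({
    "curl", "--silent", "--show-error",
    "http://localhost:8080/v1/chat/completions",
    "-H", "Content-Type: application/json",
    "-d", body,
})
print(vim.json.decode(output).choices[1].message.content)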

walkabout21 commented 1 year ago

hfcc.nvim has an interface to a hosted Open Assistant model on Hugging Face. It doesn't have as robust a feature set, so it would be great if Hugging Face chat could be leveraged with this plugin.

thegatsbylofiexperience commented 1 year ago

So there is a way to use llama.cpp with the OpenAI API: if one could set a different URI for the OpenAI endpoint, we would be in business. See https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
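
If such an option existed, the configuration could be as small as this. A sketch only: the open_ai table below is a hypothetical setting, not something neoai.nvim currently exposes.

require("neoai").setup({
    -- Hypothetical option, for illustration only: point the plugin's
    -- OpenAI requests at a locally hosted, OpenAI-compatible endpoint.
    open_ai = {
        url = "http://localhost:8080/v1/chat/completions",
        api_key = "sk-no-key-required", -- local servers accept any placeholder
    },
})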

shnee commented 1 year ago

I came here looking to see if this plugin could be used with llama.cpp.

Perhaps making this URL in openai.lua configurable would just work?

-- The endpoint is hardcoded here; making it configurable would allow
-- pointing the same request at a local server.
utils.exec("curl", {
    "--silent",
    "--show-error",
    "--no-buffer",
    "https://api.openai.com/v1/chat/completions",
    "-H",
    "Content-Type: application/json",
    "-H",
    "Authorization: Bearer " .. api_key,
    "-d",
    vim.json.encode(data),
})
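
One way that could look, as a sketch: resolve the endpoint from user configuration and fall back to the official URL. The config module path and option name below are assumptions, not the plugin's current API.

-- Sketch only: "neoai.config" and the open_ai.url option are assumed.
local config = require("neoai.config")

local url = (config.options.open_ai or {}).url
    or "https://api.openai.com/v1/chat/completions"

utils.exec("curl", {
    "--silent",
    "--show-error",
    "--no-buffer",
    url,
    "-H",
    "Content-Type: application/json",
    "-H",
    "Authorization: Bearer " .. api_key,
    "-d",
    vim.json.encode(data),
})
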
Budali11 commented 3 months ago

> This would be a great addition for the plugin 👍
>
> It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.

Agreed.