
nvim.ai

nvim.ai is a powerful Neovim plugin that brings AI-assisted coding and chat capabilities directly into your favorite editor. Inspired by Zed AI, it allows you to chat with your buffers, insert code with an inline assistant, and leverage various LLM providers for context-aware AI assistance.

Chat with buffers

https://github.com/user-attachments/assets/5897f318-bf2c-4bd2-b4d3-51ce5b06d049

Inline Assist

https://github.com/user-attachments/assets/a4eeb475-c753-4f6e-9c41-71e21e636c6c

Set up context and ask the LLM to generate code, then use inline assist to insert or rewrite the code.

/diagnostics

https://github.com/user-attachments/assets/d36abc9d-a81e-4b2e-9410-e7d538a3ed7f

Set up context with diagnostics from the LSP.

Features

- Chat with your buffers and other context sources
- Inline assistant for inserting or rewriting code
- Support for multiple LLM providers (Anthropic, Ollama, OpenAI-compatible servers, and more)
- /diagnostics command for adding LSP diagnostics to the chat context

Install

vim-plug

Plug 'nvim-treesitter/nvim-treesitter', {'do': ':TSUpdate'}
Plug 'nvim-lua/plenary.nvim'
Plug 'magicalne/nvim.ai', {'branch': 'main'}

Lazy

-- Setup lazy.nvim
require("lazy").setup({
  spec = {
    {"nvim-treesitter/nvim-treesitter", build = ":TSUpdate"}, -- nvim.ai depends on treesitter
    {
      "magicalne/nvim.ai",
      dependencies = {
        "nvim-lua/plenary.nvim",
        "nvim-treesitter/nvim-treesitter",
      },
      opts = {
        provider = "anthropic", -- You can configure your provider, model or keymaps here.
      }
    },

  },
  -- ...
})

Config

You can find all the configuration options and default keymaps here.

Ollama

local ai = require('ai')
ai.setup({
  provider = "ollama",
  ollama = {
    model = "llama3.1:70b", -- You can start with a smaller model like `gemma2` or `llama3.1`
    --endpoint = "http://192.168.2.47:11434", -- In case you access ollama from another machine
  }
})

Others

Add your API keys to your dotfiles

I put my keys in ~/.config/.env and source it in my .zshrc.
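For example, the sourcing step might look like this (a sketch; the paths are just the ones mentioned above):

```shell
# In ~/.zshrc: load the API keys so Neovim inherits them as environment variables
source ~/.config/.env
```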

export ANTHROPIC_API_KEY=""
export CO_API_KEY=""
export GROQ_API_KEY=""
export DEEPSEEK_API_KEY=""
export MISTRAL_API_KEY=""
export GOOGLE_API_KEY=""
export HYPERBOLIC_API_KEY=""
export OPENROUTER_API_KEY=""
export FAST_API_KEY=""
export CEREBRAS_API_KEY=""

Then select a provider in your setup:

local ai = require('ai')
ai.setup({
  --provider = "snova",
  --provider = "hyperbolic",
  --provider = "cerebras",
  --provider = "gemini",
  --provider = "mistral",
  provider = "anthropic",
  --provider = "deepseek",
  --provider = "groq",
  --provider = "cohere",
})

OpenAI compatible API

Local LLM servers such as llama.cpp and koboldcpp

local ai = require('ai')
ai.setup({
  provider = "openai",
  openai = {
    ["local "] = true,
    model = "llama3.1:70b",
    endpoint = "http://localhost:8080",
  }
})

Default Keymaps

{
  -- ..
  -- Keymaps
  keymaps = {
    toggle          = "<leader>c", -- Toggle chat dialog
    send            = "<CR>",      -- Send message in normal mode
    close           = "q",         -- Close chat dialog
    clear           = "<C-l>",     -- Clear chat history
    stop_generate   = "<C-c>",     -- Stop generating
    previous_chat   = "<leader>[", -- Open previous chat from history
    next_chat       = "<leader>]", -- Open next chat from history
    inline_assist   = "<leader>i", -- Run InlineAssist command with prompt
  },
}
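For example, to change a couple of these bindings you could pass a keymaps table to setup (a sketch; whether a partial table is merged with the defaults or replaces them wholesale is an assumption worth checking against the config linked above):

```lua
require('ai').setup({
  provider = "anthropic",
  keymaps = {
    toggle        = "<leader>a", -- avoid clashing with an existing <leader>c mapping
    inline_assist = "<leader>g",
  },
})
```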

Usage

Chat

The chat dialog is a special buffer. nvim.ai parses its content by keywords. There are 3 roles in the buffer: /system (the system prompt), /you (your messages and context commands such as /buf and /diagnostics), and /assistant (the model's replies).

Here is an example:

/system You are an expert on lua and neovim plugin development.

/you

/buf 1: init.lua

How to blablabla?

/assistant:
...
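Similarly, a chat that pulls in LSP diagnostics might look like this (a sketch modeled on the /buf example above; the exact argument form /diagnostics accepts is an assumption):

```
/system You are an expert on lua and neovim plugin development.

/you

/diagnostics

How do I fix these warnings?

/assistant:
...
```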

Context-Aware Assistance

Inline Assist

By pressing <leader>i and typing your instruction, you can insert or rewrite a code block anywhere in the current file. Note that inline assist can read the chat messages in the sidebar, so you can ask the LLM about your code and have it generate a new function in the chat, then insert that function by running inline assist with the prompt: Insert the function.

Workflow with nvim.ai

The new way of working with nvim.ai is:

1. Build up context in the chat buffer with commands such as /buf and /diagnostics.
2. Chat with the LLM about your code.
3. Run inline assist to insert or rewrite code based on the conversation.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgements

This project is inspired by Zed AI.

License

nvim.ai is licensed under the Apache License 2.0. For more details, please refer to the LICENSE file.


⚠️ Note: This plugin is under active development. Features and usage may change.