nvim.ai

nvim.ai is a powerful Neovim plugin that brings AI-assisted coding and chat capabilities directly into your favorite editor. Inspired by Zed AI, it lets you chat with your buffers, insert code with an inline assistant, and leverage various LLM providers for context-aware AI assistance.
https://github.com/user-attachments/assets/5897f318-bf2c-4bd2-b4d3-51ce5b06d049
https://github.com/user-attachments/assets/a4eeb475-c753-4f6e-9c41-71e21e636c6c
Set up context and ask LLM to generate code. Use inline assist to insert/rewrite the code.
https://github.com/user-attachments/assets/d36abc9d-a81e-4b2e-9410-e7d538a3ed7f
Set up context with diagnostics from LSP.
Install with vim-plug:

Plug 'nvim-treesitter/nvim-treesitter', {'do': ':TSUpdate'}
Plug 'nvim-lua/plenary.nvim'
Plug 'magicalne/nvim.ai', {'branch': 'main'}
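With vim-plug you still need to call the plugin's setup function from Lua after plug#end(). A minimal sketch, mirroring the setup call used elsewhere in this README (the provider choice is illustrative):

lua << EOF
require('ai').setup({
  provider = "anthropic", -- or any provider listed below
})
EOF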
Or with lazy.nvim:

-- Setup lazy.nvim
require("lazy").setup({
  spec = {
    { "nvim-treesitter/nvim-treesitter", build = ":TSUpdate" }, -- nvim.ai depends on treesitter
    {
      "magicalne/nvim.ai",
      dependencies = {
        "nvim-lua/plenary.nvim",
        "nvim-treesitter/nvim-treesitter",
      },
      opts = {
        provider = "anthropic", -- You can configure your provider, model or keymaps here.
      },
    },
  },
  -- ...
})
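lazy.nvim passes the opts table to the plugin's setup function. If you prefer to be explicit, or want to compute options at load time, the config form is equivalent (assuming the main module is ai, as used throughout this README):

{
  "magicalne/nvim.ai",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
  },
  config = function()
    -- Explicit setup call; receives the same table lazy.nvim would pass via `opts`.
    require('ai').setup({
      provider = "anthropic",
    })
  end,
}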
You can find all of the configuration options and keymaps here.
To use Ollama:

local ai = require('ai')
ai.setup({
  provider = "ollama",
  ollama = {
    model = "llama3.1:70b", -- You can start with a smaller one like `gemma2` or `llama3.1`
    -- endpoint = "http://192.168.2.47:11434", -- In case you access ollama from another machine
  },
})
I put my API keys in ~/.config/.env and source it in my .zshrc:
export ANTHROPIC_API_KEY=""
export CO_API_KEY=""
export GROQ_API_KEY=""
export DEEPSEEK_API_KEY=""
export MISTRAL_API_KEY=""
export GOOGLE_API_KEY=""
export HYPERBOLIC_API_KEY=""
export OPENROUTER_API_KEY=""
export FAST_API_KEY=""
export CEREBRAS_API_KEY=""
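Neovim inherits these variables from the shell that launched it, so you can sanity-check a key from inside the editor with plain Lua:

:lua print(os.getenv("ANTHROPIC_API_KEY"))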
To use a hosted provider, set provider accordingly:

local ai = require('ai')
ai.setup({
  --provider = "snova",
  --provider = "hyperbolic",
  --provider = "cerebras",
  --provider = "gemini",
  --provider = "mistral",
  provider = "anthropic",
  --provider = "deepseek",
  --provider = "groq",
  --provider = "cohere",
})
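If a hosted provider accepts its own table in the same shape as the ollama example above (an assumption; check the plugin's config module for the exact keys), pinning a specific model would look roughly like this, with the model name purely illustrative:

local ai = require('ai')
ai.setup({
  provider = "anthropic",
  anthropic = {
    -- Hypothetical key/value following the ollama pattern; verify against the plugin's defaults.
    model = "claude-3-5-sonnet-20240620",
  },
})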
nvim.ai also works with local OpenAI-compatible servers such as llamacpp and koboldcpp; point the endpoint at wherever your server listens (llama.cpp's HTTP server defaults to port 8080):
local ai = require('ai')
ai.setup({
  provider = "openai",
  openai = {
    ["local"] = true, -- `local` is a Lua keyword, so it must be quoted as a table key
    model = "llama3.1:70b",
    endpoint = "http://localhost:8080",
  },
})
Keymaps are configured in the same setup table:

{
  -- ..
  -- Keymaps
  keymaps = {
    toggle = "<leader>c", -- Toggle chat dialog
    send = "<CR>", -- Send message in normal mode
    close = "q", -- Close chat dialog
    clear = "<C-l>", -- Clear chat history
    stop_generate = "<C-c>", -- Stop generating
    previous_chat = "<leader>[", -- Open previous chat from history
    next_chat = "<leader>]", -- Open next chat from history
    inline_assist = "<leader>i", -- Run InlineAssist command with prompt
  },
}
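For example, overriding just the chat toggle looks like this (a sketch; the provider choice is illustrative, and it assumes the plugin merges your table with its defaults, as most setup-style plugins do):

require('ai').setup({
  provider = "anthropic",
  keymaps = {
    toggle = "<leader>a", -- replace the default <leader>c
  },
})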
The chat dialog is a special buffer. nvim.ai will parse the content with keywords. There are 3 roles in the buffer:

- system: set a system prompt by typing /system your_system_prompt in the first line.
- you: anything after /you will be treated as the prompt; add a buffer to the context with /buf {bufnr}. Send the message by pressing Enter in normal mode.
- assistance: the LLM's response is written back under this keyword.
Just like Zed AI, this feature is called "chat with context." You can edit the last prompt if you don't like the response, and you can go back and forth like this. Here is an example:
/system You are an expert on lua and neovim plugin development.
/you
/buf 1: init.lua
How to blablabla?
/assistance:
...
By pressing <leader>i and typing your instruction, you can insert or rewrite a code block anywhere in the current file.
Note that inline assist can read the chat messages in the sidebar. Therefore, you can ask the LLM about your code and instruct it to generate a new function, then insert that function by running inline assist with the prompt: Insert the function.
The new way of working with nvim.ai is: build up context in the chat, ask the LLM for the change you want, and then apply it to your code with inline assist.

Contributions are welcome! Please feel free to submit a Pull Request.
This project is inspired by Zed AI.
nvim.ai is licensed under the Apache License. For more details, please refer to the LICENSE file.
⚠️ Note: This plugin is under active development. Features and usage may change.