SilasMarvin / lsp-ai

LSP-AI is an open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software engineers, not replace them.
MIT License
1.82k stars · 55 forks

Neovim Plugin and nvim-lspconfig integration #2

Open Nold360 opened 3 weeks ago

Nold360 commented 3 weeks ago

Hi,

I really would love to test this with Neovim, but I have no idea how to set up a custom LSP. Maybe using nvim-lspconfig?

SilasMarvin commented 3 weeks ago

Hey @Nold360. We were just having this discussion on Reddit yesterday and will hopefully have some example configurations in the repository and make a PR into nvim-lspconfig soon.

Here is an answer provided by Microbzz on Reddit:

local lsp_ai_config = {
  -- Uncomment if using nvim-cmp
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),
  cmd = { 'lsp-ai' },
  root_dir = vim.loop.cwd(),
  init_options = {
    memory = {
      file_store = {}
    },
    models = {
      model1 = {
        type = "llama_cpp",
        repository = "mmnga/codegemma-1.1-2b-gguf",
        name = "codegemma-1.1-2b-Q8_0.gguf",
        n_ctx = 2048,
        n_gpu_layers = 999
      }
    },
    completion = {
      model = "model1",
      parameters = {
        -- CodeGemma's FIM prompt is <|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>,
        -- with the completion generated after <|fim_middle|>, hence this mapping
        fim = {
          start = "<|fim_prefix|>",
          middle = "<|fim_suffix|>",
          ["end"] = "<|fim_middle|>"
        },
        max_context = 2000,
        max_new_tokens = 32
      }
    }
  },
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

You can swap out the value of init_options with whatever configuration you prefer. See the configuration section of the wiki for more info.
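For instance (a hedged sketch, reusing the file_path key that shows up in configs later in this thread), the llama_cpp backend can point at a local GGUF file instead of pulling one from a Hugging Face repository; the path below is a placeholder:

models = {
  model1 = {
    type = "llama_cpp",
    file_path = "/path/to/your-model.gguf", -- placeholder: any local GGUF file
    n_ctx = 2048,
    n_gpu_layers = 999
  }
}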

There is still an open discussion around getting ghost text working and potentially shipping our own neovim plugin for automatic inline completion.

Myzel394 commented 2 weeks ago

For those of you who want a very rough PoC you can use this snippet. This will use ~ClosedAI~ OpenAI's chat completion and can be called using <leader>co. This is pretty much just a work in progress, but maybe it will help someone :)

local lsp_ai_config = {
  -- Uncomment if using nvim-cmp
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),
  cmd = { 'lsp-ai' },
  root_dir = vim.loop.cwd(),
  init_options = {
    memory = {
      file_store = {}
    },
    models = {
      model1 = {
        type = "open_ai",
        chat_endpoint = "https://api.openai.com/v1/chat/completions",
        model = "gpt-4-1106-preview",
        auth_token_env_var_name = "OPENAI_API_KEY",
      }
    },
    completion = {
      model = "model1",
      parameters = {
        max_context = 2048,
        max_new_tokens = 128,
        messages = {
          {
            role = "system",
            content = "You are a chat completion system like GitHub Copilot. You will be given a context and a code snippet. You should generate a response that is a continuation of the context and code snippet."
          },
          {
            role = "user",
            content = "Context: {CONTEXT} - Code: {CODE}"
          }
        }
      }
    }
  },
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

-- Register key shortcut
vim.keymap.set(
    "n",
    "<leader>co",
    function()
        print("Loading completion...")

        -- Build position and text document params for the current buffer,
        -- then merge them into a single textDocument/completion request
        local position_params = vim.lsp.util.make_position_params(0)
        local document_params = vim.lsp.util.make_text_document_params(0)
        local params = vim.tbl_extend("force", position_params, document_params)

        -- Synchronous request with a 10 second timeout
        local result = vim.lsp.buf_request_sync(0, "textDocument/completion", params, 10000)

        print(vim.inspect(result))
    end,
    { noremap = true }
)

I'd definitely wish for ghost text, just like copilot.vim does it. I'm not too familiar with LSPs, but #5 could be related to this.

SilasMarvin commented 2 weeks ago

For those of you who want a very rough PoC you can use this snippet. This will use ~ClosedAI~ OpenAI's chat completion and can be called using <leader>co. […]

Thank you for sharing this! To integrate fully with Neovim and provide good inline completion with ghost text, I think we will need to write our own plugin. Right now it will pretty much mimic the functionality of copilot.vim, but with more support for different completion backends. This will change as we add new supported features to LSP-AI that we want Neovim to take advantage of, like chatting with your code and semantic search over your code base.

If anyone sees this and is interested in writing a Neovim plugin, feel free to do it! I'm happy to help however I can. Our VS Code plugin is a really good place to start for the kind of functionality it should provide: https://github.com/SilasMarvin/lsp-ai/blob/main/editors/vscode/src/index.ts

Robzz commented 2 weeks ago

nvim-cmp just merged support for multi-line ghost text, so a cmp-based setup should now be quite viable. I'll play with it some more and see if I can get a decent example config going.

Update: yeah, that works. The main issue right now is that the window containing the completion is drawn below the cursor, which hides the ghost text on the following lines, but there's a PR (#1955) addressing it. I'll play with that branch a bit.

Update 2: yeah, it's not perfect, the window does not always go above the cursor, but it kinda works. The first character of the prediction is also not displayed in ghost text, not sure if it's a config problem or a cmp bug. Anyway, it looks like this: Screenshot_20240610_145120
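For anyone following along, the ghost text toggle in nvim-cmp lives under its experimental table; a minimal sketch, assuming an otherwise standard nvim-cmp setup:

local cmp = require('cmp')

cmp.setup({
  experimental = {
    ghost_text = true, -- render the selected completion inline as ghost text
  },
})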

Myzel394 commented 2 weeks ago

@Robzz we'd love to see how you did that! :)

Robzz commented 2 weeks ago

I'll be opening a draft PR in a bit. I would not recommend merging it until the default config is integrated in nvim-lspconfig (managing the LSP lifecycle by hand is annoying, and exactly what nvim-lspconfig is here for), but at least it should give a place to point the more adventurous people who want to try it right now.

Edit: PR up, see #17

SilasMarvin commented 2 weeks ago

I'll be opening a draft PR in a bit. I would not recommend merging it until the default config is integrated in nvim-lspconfig (managing the LSP lifecycle by hand is annoying, and exactly what nvim-lspconfig is here for), but at least it should give a place to point the more adventurous people who want to try it right now.

This is awesome!

One thing I'm still unsure of is how to handle default configs. Right now, if a user passes empty initializationOptions to LSP-AI, we error out. We require that they provide a memory object and a models array.

We could absolutely provide a server-side default for memory, but it's tough choosing a default models array, as there are many options for model backends, and all of them are hardware- or API-key-dependent.
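To make that requirement concrete, here is a sketch of the currently required init_options shape; the body of the model entry depends entirely on which backend the user picks (the examples earlier in this thread show llama_cpp and open_ai variants):

local init_options = {
  memory = { file_store = {} }, -- required
  models = {
    -- required: at least one model definition with backend-specific fields
    model1 = { type = "open_ai" --[[ plus the rest of the backend config ]] },
  },
}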

For the VS Code plugin, I thought that making OpenAI with gpt-4o the default in the plugin settings would be a reasonable choice, but it honestly wasn't my favorite as it still requires users to set an OPENAI_API_KEY for the plugin to work.

We want to make it as easy as possible for everyone to get started using LSP-AI, but I think it requires that they make some initial decision on at least which backend they want to use, which brings me back to being unsure how to implement any default config.

Myzel394 commented 2 weeks ago

We want to make it as easy as possible for everyone to get started using LSP-AI, but I think it requires that they make some initial decision on at least which backend they want to use, which brings me back to being unsure how to implement any default config.

... and ...

For the VS Code plugin, I thought that making OpenAI with gpt-4o the default in the plugin settings would be a reasonable choice, but it honestly wasn't my favorite as it still requires users to set an OPENAI_API_KEY for the plugin to work.

I think this is the best case for a default config. It's much more likely that a user will expose an OPENAI_API_KEY than have a local LLM already set up, especially since we wouldn't know what model they're running. So I'm in favor of using OpenAI as the default config.

SilasMarvin commented 2 weeks ago

We want to make it as easy as possible for everyone to get started using LSP-AI, but I think it requires that they make some initial decision on at least which backend they want to use, which brings me back to being unsure how to implement any default config.

... and ...

For the VS Code plugin, I thought that making OpenAI with gpt-4o the default in the plugin settings would be a reasonable choice, but it honestly wasn't my favorite as it still requires users to set an OPENAI_API_KEY for the plugin to work.

I think this is the best case for a default config. It's much more likely that a user will expose an OPENAI_API_KEY than have a local LLM already set up, especially since we wouldn't know what model they're running. So I'm in favor of using OpenAI as the default config.

I think you are probably right here. I do think they shouldn't be defaults on the server, though; they should be defaults for the config / plugin to send to the server. I don't want the server to have defaults that might expose users' codebases to third parties.
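A sketch of what that could look like on the plugin side (all names here are hypothetical): keep the non-sensitive pieces as client-side defaults and deep-merge whatever the user provides before starting the server.

-- Hypothetical plugin-side defaults; nothing is sent to a third party unless
-- the user explicitly configures a model backend
local defaults = {
  cmd = { 'lsp-ai' },
  init_options = {
    memory = { file_store = {} },
    -- deliberately no default `models`: the user must pick a backend
  },
}

local function setup(user_config)
  local config = vim.tbl_deep_extend("force", defaults, user_config or {})
  vim.api.nvim_create_autocmd({ "BufEnter", "BufWinEnter" }, {
    callback = function() vim.lsp.start(config) end,
  })
end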

Robzz commented 2 weeks ago

I think you are probably right here. I do think they shouldn't be defaults on the server, though; they should be defaults for the config / plugin to send to the server. I don't want the server to have defaults that might expose users' codebases to third parties.

Agreed. I think ideally the LSP part and the LSP/IDE glue should have sensible defaults, but the backend/model config should probably be left for the user to pick. So as far as I can tell, that leaves only the memory key, which accepts only an empty file store for now? In that case, aside from registering the textDocument/generation action with nvim, there wouldn't be that much to do.

SilasMarvin commented 2 weeks ago

I think you are probably right here. I do think they shouldn't be defaults on the server, though; they should be defaults for the config / plugin to send to the server. I don't want the server to have defaults that might expose users' codebases to third parties.

Agreed. I think ideally the LSP part and the LSP/IDE glue should have sensible defaults, but the backend/model config should probably be left for the user to pick. So as far as I can tell, that leaves only the memory key, which accepts only an empty file store for now? In that case, aside from registering the textDocument/generation action with nvim, there wouldn't be that much to do.

Yes, we can make the memory key have a default, but I agree the model config should be controlled by the user. I just checked out your fork of nvim-lspconfig; thank you for making that. For the init_options, if we want to provide OpenAI gpt-4o as the default model, we can use the example OpenAI chat config I have in the configuration section of the wiki: https://github.com/SilasMarvin/lsp-ai/wiki/Configuration#chat-3

Or maybe I am misunderstanding what is standard for nvim-lspconfig, and it's OK to provide a default that doesn't fully work without the user providing more parameters?

Robzz commented 2 weeks ago

Or maybe I am misunderstanding what is standard for nvim-lspconfig, and it's OK to provide a default that doesn't fully work without the user providing more parameters?

This I really don't know; maybe there are similar cases among the supported LSP servers in nvim-lspconfig. I haven't seen any, but I only glanced over it, and it's a long list. I haven't seen any AI LSP servers in there either, since Hugging Face went the way of writing their own plugin for llm-ls instead, so there's really no one to copy from. Asking on the nvim Matrix server is probably the easiest way to know for sure.

SilasMarvin commented 2 weeks ago

Or maybe I am misunderstanding what is standard for nvim-lspconfig, and it's OK to provide a default that doesn't fully work without the user providing more parameters?

This I really don't know; maybe there are similar cases among the supported LSP servers in nvim-lspconfig. I haven't seen any, but I only glanced over it, and it's a long list. I haven't seen any AI LSP servers in there either, since Hugging Face went the way of writing their own plugin for llm-ls instead, so there's really no one to copy from. Asking on the nvim Matrix server is probably the easiest way to know for sure.

Got it. I'll ask in the Matrix server.

We could do the same thing llm-ls does and just fork the llm-ls Neovim plugin. This would provide a better user experience for completions. I know we do eventually want our own plugin.

We could also have both?

AlejandroSuero commented 2 weeks ago

For "ghost text" I am working on features for supermaven-nvim and we use something like this:

I didn't implement this feature so I am not gonna say I completely get it but when creating the autocmd adding a namespace like lsp-ai for example and when getting the result add it to a api-extended-marks, RTFM for more information on that.

local augroup = vim.api.nvim_create_augroup("lsp-ai", { clear = true })
local ns_id = vim.api.nvim_create_namespace("lsp-ai")
local opts = {
  id = 1,
  hl_mode = "combine",
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  group = augroup,
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

-- Register key shortcut
vim.keymap.set(
    "n",
    "<leader>co",
    function()
        print("Loading completion...")

        local position_params = vim.lsp.util.make_position_params(0)
        local document_params = vim.lsp.util.make_text_document_params(0)
        local params = vim.tbl_extend("force", position_params, document_params)

        local result = vim.lsp.buf_request_sync(0, "textDocument/completion", params, 10000)

        print(vim.inspect(result))
        -- Place an extmark at the cursor; for visible ghost text, `opts` would
        -- also need `virt_text` (see the sketch after the warning below)
        vim.api.nvim_buf_set_extmark(0, ns_id, vim.fn.line(".") - 1, vim.fn.col(".") - 1, opts)
    end,
    { noremap = true }
)

[!WARNING]

Not sure this works as expected for ghost text, or how the result comes in, so you may have to modify it and play around a bit.
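For what it's worth, visible ghost text usually also means populating virt_text on the extmark. A hedged sketch (pulling completion_text out of the LSP result is not shown, and virt_text_pos = "inline" needs Neovim 0.10+):

local ns_id = vim.api.nvim_create_namespace("lsp-ai")
local completion_text = "..." -- hypothetical: extracted from the textDocument/completion result

vim.api.nvim_buf_set_extmark(0, ns_id, vim.fn.line(".") - 1, vim.fn.col(".") - 1, {
  virt_text = { { completion_text, "Comment" } }, -- text plus a highlight group
  virt_text_pos = "inline", -- requires Neovim 0.10+
  hl_mode = "combine",
})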

Robzz commented 2 weeks ago

Got it. I'll ask in the Matrix server.

Ok great!

We could also have both?

Yes, in my understanding it's not unusual in the nvim ecosystem to have an additional plugin for language-server-specific features while keeping the minimal config in the lspconfig plugin; it's even the official recommendation.

SilasMarvin commented 2 weeks ago

For "ghost text" I am working on features for supermaven-nvim and we use something like this:

I didn't implement this feature so I am not gonna say I completely get it but when creating the autocmd adding a namespace like lsp-ai for example and when getting the result add it to a api-extended-marks, RTFM for more information on that.

local augroup = vim.api.nvim_create_augroup("lsp-ai", { clear = true })
local ns_id = vim.api.nvim_create_namespace("lsp-ai")
local opts = {
  id = 1,
  hl_mode = "combine",
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  group = augroup,
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

-- Register key shortcut
vim.keymap.set(
    "n", 
    "<leader>co", 
    function()
        print("Loading completion...")

        local x = vim.lsp.util.make_position_params(0)
        local y = vim.lsp.util.make_text_document_params(0)

        local combined = vim.tbl_extend("force", x, y)

        local result = vim.lsp.buf_request_sync(
            0,
            "textDocument/completion",
            combined,
            10000
        )

        print(vim.inspect(result))
        vim.api.nvim_buf_set_extmark(0, ns_id, vim.fn.line(".") - 1, vim.fn.col(".") - 1, opts)
    end,
    {
        noremap = true,
    }
)

Warning

Not sure this works as expected for ghost text or how the result comes in. So you may have to modified and play around a bit.

Thanks for sharing!

SilasMarvin commented 2 weeks ago

Got it. I'll ask in the Matrix server.

Ok great!

We could also have both?

Yes, in my understanding it's not unusual in the nvim ecosystem to have an additional plugin for language-server-specific features while keeping the minimal config in the lspconfig plugin; it's even the official recommendation.

Got it, that makes sense. I'll ask around about defaults on the Matrix, probably tomorrow.

Would you want to head up our Neovim plugin? I'm thinking for now we just fork the llm-ls Neovim plugin, edit the configuration options to match the options I have for our VS Code plugin, and then just have it perform inline completion with ghost text.

SilasMarvin commented 2 weeks ago

Got it. I'll ask in the Matrix server.

Ok great!

We could also have both?

Yes, in my understanding it's not unusual in the nvim ecosystem to have an additional plugin for language-server-specific features while keeping the minimal config in the lspconfig plugin; it's even the official recommendation.

I asked on the Matrix; they recommended discussing it on their GitHub. I suggest we create a PR that requires the user to provide defaults and have a discussion about it in that PR.

Robzz commented 2 weeks ago

Would you want to head up our Neovim plugin? I'm thinking for now we just fork the llm-ls Neovim plugin, edit the configuration options to match the options I have for our VS Code plugin, and then just have it perform inline completion with ghost text.

I'm not sure how long-term I can commit to it, but sure, I'm happy to at least help get it off the ground.

I asked on the Matrix; they recommended discussing it on their GitHub. I suggest we create a PR that requires the user to provide defaults and have a discussion about it in that PR.

Alright, I'll send them the PR hopefully tomorrow to get the discussion going.

SilasMarvin commented 2 weeks ago

Would you want to head up our Neovim plugin? I'm thinking for now we just fork the llm-ls Neovim plugin, edit the configuration options to match the options I have for our VS Code plugin, and then just have it perform inline completion with ghost text.

I'm not sure how long-term I can commit to it, but sure, I'm happy to at least help get it off the ground.

I asked on the Matrix; they recommended discussing it on their GitHub. I suggest we create a PR that requires the user to provide defaults and have a discussion about it in that PR.

Alright, I'll send them the PR hopefully tomorrow to get the discussion going.

This is awesome, thank you!

SuperBo commented 2 weeks ago

Hi @SilasMarvin, @Robzz, what is your final decision? If you decide to start a dedicated plugin, I'm happy to help (FYI, I'm developing another Neovim plugin: https://github.com/SuperBo/fugit2.nvim).

SilasMarvin commented 2 weeks ago

Hi @SilasMarvin, @Robzz, what is your final decision? If you decide to start a dedicated plugin, I'm happy to help (FYI, I'm developing another Neovim plugin: https://github.com/SuperBo/fugit2.nvim).

We definitely want to have a dedicated plugin, so if you want to get started on it, that would be awesome! We have one for VS Code that should be a good reference: https://github.com/SilasMarvin/lsp-ai/tree/main/editors/vscode. Here is an overview about it on the wiki: https://github.com/SilasMarvin/lsp-ai/wiki/Plugins

You could also fork https://github.com/huggingface/llm.nvim and use it as a base. I'm happy to give more input if you want, just let me know!

SuperBo commented 2 weeks ago

@SilasMarvin, OK, I will start working on it tomorrow. Are you OK with the name lsp-ai.nvim, or do you suggest another name :D?

SilasMarvin commented 2 weeks ago

@SilasMarvin, OK, I will start working on it tomorrow. Are you OK with the name lsp-ai.nvim, or do you suggest another name :D?

That is a great name, I love it! Let me know how it goes, I'm excited to see it!

AlejandroSuero commented 2 weeks ago

@SuperBo @SilasMarvin I created this template repo for Neovim plugins; it has .editorconfig, selene, and stylua ready to go, with easy-to-use make targets and CI.

It also has plenary.nvim and vusted tests set up and ready to use with make targets and CI.

It also has more utilities; why you should use something like codespell is more of a personal opinion, but you can always ignore it or delete it.

SuperBo commented 2 weeks ago

@AlejandroSuero, thank you for the great template! Can I cherry-pick your selene and stylua configs?

For testing, I prefer a native busted + nlua setup. I also need to add a neorocks formula, so I will start with an empty repo first, without any template. Hope that doesn't bother you!

AlejandroSuero commented 2 weeks ago

@SuperBo for the neorocks formula I have something done in https://github.com/AlejandroSuero/freeze-code.nvim

Cherry-pick what you want, it's free to use.

I haven't gotten to try busted + nlua yet; I usually stick to testing with Neovim, since that's how my plugins interact most of the time. Any good places to take a look at how to start with busted + nlua?

SuperBo commented 2 weeks ago

@AlejandroSuero, you can see the sample setup here: https://github.com/SuperBo/fugit2.nvim.
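For a flavor of what a busted + nlua spec looks like, here's a minimal sketch (assuming the plugin module is named lsp_ai, as in the config snippet further down the thread):

-- spec/lsp_ai_spec.lua: run with busted, using nlua as the Lua interpreter
describe("lsp-ai.nvim", function()
  it("exposes a setup function", function()
    local ok, plugin = pcall(require, "lsp_ai")
    assert.is_true(ok)
    assert.is_function(plugin.setup)
  end)
end)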

I decided to fork from https://github.com/huggingface/llm.nvim. I saw some of your pull requests there (https://github.com/huggingface/llm.nvim/pull/98, https://github.com/huggingface/llm.nvim/pull/97). Could I merge them into my fork :D?

AlejandroSuero commented 2 weeks ago

@SuperBo yeah, go for it.

I will be checking out your git plugin tomorrow and will also take a look at the testing setup.

SuperBo commented 1 week ago

https://github.com/SilasMarvin/lsp-ai/assets/2666479/5d40b301-ac03-4788-a3b6-9306344c34e8

First update, guys: we can now ask the AI for whole-file code completion.

SilasMarvin commented 1 week ago

That is awesome!!! I love it. This is really exciting stuff!

SuperBo commented 1 week ago

Can anyone help me test https://github.com/SuperBo/lsp-ai.nvim?

An example lazy.nvim config can look like this:

  {
    'SuperBo/lsp-ai.nvim',
    opts = {
      -- autostart = false,
      server = {
        memory = {
          file_store = {},
        },
        models = {
          model1 = {
            type = "llama_cpp",
            file_path = "/opt/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
            n_ctx = 512,
            -- ctx_size = 512,
            n_gpu_layers = 500,
          }
        }
      },
      generation = {
        model = "model1",
        parameters = {
          max_tokens = 256,
          max_context = 1024,
          messages = {
            {
              role = "system",
              content = "You are a programming completion tool. Replace <CURSOR> with the correct code."
            },
            {
              role = "user",
              content = "{CODE}"
            }
          }
        }
      }
    },
    dependencies = { 'neovim/nvim-lspconfig' },
  }

The command to ask LSP-AI for a generation is :LSPAIGenerate.
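If you want it on a key, a small sketch (the mapping itself is an arbitrary choice):

-- Bind the generation command to a key; <leader>cg is just an example
vim.keymap.set("n", "<leader>cg", "<cmd>LSPAIGenerate<CR>", { desc = "LSP-AI: generate" })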

fredrikaverpil commented 1 week ago

I had to do these modifications:

  {
-    dir ='SuperBo/lsp-ai.nvim',
+    "SuperBo/lsp-ai.nvim",
    opts = {
      -- autostart = false,
      server = {
        memory = {
          file_store = {},
        },
        models = {
          model1 = {
            type = "llama_cpp",
            file_path = "/opt/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
            n_ctx = 512,
            -- ctx_size = 512,
            n_gpu_layers = 500,
          }
        }
      },
      generation = {
        model = "model1",
        parameters = {
          max_tokens = 256,
          max_context = 1024,
          messages = {
            {
              role = "system",
              content = "You are a programming completion tool. Replace <CURSOR> with the correct code."
            },
            {
              role = "user",
              content = "{CODE}"
            }
          }
        }
      }
    },
    dependencies = { 'neovim/nvim-lspconfig' },
+    config = function(_, opts)
+      require("lsp_ai").setup(opts)
+    end,
  }

But I can't run :LSPAIGenerate:

   Error  08:52:51 msg_show.emsg   LSPAIGenerate E492: Not an editor command: LSPAIGenerate

SuperBo commented 1 week ago

@fredrikaverpil, what language are you testing (Python, Go, ...)? I've hard-coded the supported file types for now: { "go", "java", "python", "rust" }. Will make this configurable later.

Do you have lsp-ai compiled with llama_cpp? You can test with OpenAI if you have an OpenAI key.

By the way, you don't need to add the config function alongside dependencies.
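For anyone curious, a hedged sketch of what that filetype gating might look like (using the lsp_ai_config table from the earlier snippets; the plugin's actual internals may differ):

-- Only start LSP-AI for the currently hard-coded filetypes
vim.api.nvim_create_autocmd("FileType", {
  pattern = { "go", "java", "python", "rust" },
  callback = function() vim.lsp.start(lsp_ai_config) end,
})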

fredrikaverpil commented 1 week ago

@SuperBo oh, I was just looking at a README.md file when trying this out 😆

If you like, we can continue this discussion in my dotfiles/nvim PR, as I'm facing errors when opening a .go file inside this repo:

   Warn  09:10:41 notify.warn Client lsp_ai quit with exit code 1 and signal 0. Check log for errors: /Users/fredrik/.local/state/fredrik/lsp.log
   Error  09:10:41 msg_show.lua_error Error executing vim.schedule lua callback: ....10.0/share/nvim/runtime/lua/vim/lsp/semantic_tokens.lua:185: Invalid buffer id: 18
stack traceback:
    [C]: in function 'nvim_buf_attach'
    ....10.0/share/nvim/runtime/lua/vim/lsp/semantic_tokens.lua:185: in function 'new'
    ....10.0/share/nvim/runtime/lua/vim/lsp/semantic_tokens.lua:612: in function 'start'
    .../neovim/0.10.0/share/nvim/runtime/lua/vim/lsp/client.lua:959: in function <.../neovim/0.10.0/share/nvim/runtime/lua/vim/lsp/client.lua:957>
   Error  09:10:42 msg_show.emsg LSP[gopls]: Error SERVER_REQUEST_HANDLER_ERROR: ".../Cellar/neovim/0.10.0/share/nvim/runtime/lua/vim/lsp.lua:310: Invalid buffer id: 18"
   Error  09:10:42 msg_show.lua_error Error executing vim.schedule lua callback: ....10.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:154: Invalid buffer id: 19
stack traceback:
    [C]: in function 'nvim_buf_get_name'
    ....10.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:154: in function 'init'
    .../neovim/0.10.0/share/nvim/runtime/lua/vim/lsp/client.lua:907: in function '_text_document_did_open_handler'
    .../neovim/0.10.0/share/nvim/runtime/lua/vim/lsp/client.lua:942: in function '_on_attach'
    .../neovim/0.10.0/share/nvim/runtime/lua/vim/lsp/client.lua:615: in function ''
    vim/_editor.lua: in function <vim/_editor.lua:0>

SilasMarvin commented 2 days ago

@fredrikaverpil, what language are you testing (Python, Go, ...)? I've hard-coded the supported file types for now: { "go", "java", "python", "rust" }. Will make this configurable later.

Do you have lsp-ai compiled with llama_cpp? You can test with OpenAI if you have an OpenAI key.

By the way, you don't need to add the config function alongside dependencies.

Following up on this thread: we've been talking in the Discord a little bit about our Neovim integration, and we'd love to get you in there, @SuperBo. The link is in the README.

SuperBo commented 2 days ago

@SilasMarvin, sorry, I've been quite busy since last week. I will have more free time this weekend. See you on Discord.