Open fredrikaverpil opened 4 months ago
Is it possible that the desired model is hardcoded to `model1`?
I've got this config:
return { { "SuperBo/lsp-ai.nvim", dependencies = { "neovim/nvim-lspconfig" }, opts = { autostart = true, server = { memory = { file_store = {}, }, models = { llama_cpp = { type = "llama_cpp", file_path = vim.fn.expand("~/code/public/CodeQwen1.5-7B-Chat-GGUF/codeqwen-1_5-7b-chat-q4_k_m.gguf"), n_ctx = 1024 * 4, n_gpu_layers = 500, }, ollama = { type = "ollama", model = "llama3", }, }, }, completion = { model = "ollama", parameters = { messages = { { role = "system", content = "You are a programming completion tool. Replace <CURSOR> with the correct code.", }, { role = "user", content = "{CODE}", }, }, }, max_context_size = 1024 * 4, }, }, config = function(_, opts) require("lsp_ai").setup(opts) end, }, }
And I get this error:
```
Error 16:06:45 notify.error [LSP-AI] can't find model: model1
```
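As far as I understand, the server resolves `completion.model` by looking it up as a key in the `models` table, so this error suggests it was asked for a key named `model1` that my config never defines. A small illustration of that lookup (my assumption of the behaviour, not taken from the plugin's source):

```lua
-- Illustration only: the completion model name is resolved against the keys
-- of the models table, so requesting "model1" fails when only "llama_cpp"
-- and "ollama" are registered.
local models = { llama_cpp = {}, ollama = {} }
local requested = "model1"
if models[requested] == nil then
  print("can't find model: " .. requested) -- matches the reported error
end
```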
If I instead rename e.g. the `ollama` key into `model1`, I don't get the error and the completion works:
return { { "SuperBo/lsp-ai.nvim", dependencies = { "neovim/nvim-lspconfig" }, opts = { autostart = true, server = { memory = { file_store = {}, }, models = { llama_cpp = { type = "llama_cpp", file_path = vim.fn.expand("~/code/public/CodeQwen1.5-7B-Chat-GGUF/codeqwen-1_5-7b-chat-q4_k_m.gguf"), n_ctx = 1024 * 4, n_gpu_layers = 500, }, - ollama = { + model1 = { type = "ollama", model = "llama3", }, }, }, completion = { - model = "ollama", + model = "model1", parameters = { messages = { { role = "system", content = "You are a programming completion tool. Replace <CURSOR> with the correct code.", }, { role = "user", content = "{CODE}", }, }, }, max_context_size = 1024 * 4, }, }, config = function(_, opts) require("lsp_ai").setup(opts) end, }, }