David-Kunz / gen.nvim

Neovim plugin to generate text using LLMs with customizable prompts
The Unlicense
992 stars 64 forks

"Expected Lua number" error when using the plugin #49

Closed: Aldans closed this issue 7 months ago

Aldans commented 7 months ago

Hello David, thanks for this great plugin, it's awesome.

Could you please help me fix this problem:

curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d '{"model": "mistral", "stream": true, "prompt": "generate kubernetes manifest file as pod with busybox"}'
...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:292: Expected Lua number
stack traceback:
	[C]: in function 'nvim_buf_delete'
	...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:292: in function <...my-user/.local/share/nvim/lazy/gen.nvim/lua/gen/init.lua:268>
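For context, this error typically means a non-numeric value (such as `nil`) reached `nvim_buf_delete`, which requires a numeric buffer handle. A minimal defensive sketch of the pattern that avoids it (hypothetical illustration, not the plugin's actual fix):

```lua
-- Hypothetical guard: nvim_buf_delete raises "Expected Lua number"
-- when handed nil instead of a buffer handle. Validating the handle
-- before deleting avoids the error.
local function safe_buf_delete(buf)
  if type(buf) == "number" and vim.api.nvim_buf_is_valid(buf) then
    vim.api.nvim_buf_delete(buf, { force = true })
  end
end
```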

Setup:

astronvim: require("astronvim.health").check()

AstroNvim ~
- AstroNvim Version: v3.39.0
- Neovim Version: v0.9.2
- OK Using stable Neovim >= 0.8.0
- OK `git` is installed: Used for core functionality such as updater and plugin management
- OK `open` is installed: Used for `gx` mapping for opening files with system opener (Optional)
- OK `lazygit` is installed: Used for mappings to pull up git TUI (Optional)
- OK `node` is installed: Used for mappings to pull up node REPL (Optional)
- OK `gdu` is installed: Used for mappings to pull up disk usage analyzer (Optional)
- WARNING `btm` is not installed: Used for mappings to pull up system monitor (Optional)
- OK `python` is installed: Used for mappings to pull up python REPL (Optional)

Plugin config:

return {
  "David-Kunz/gen.nvim",
  keys = {
    { "<leader>ga", ":Gen<CR>", desc = "Ollama Generate", mode = { "n", "v" } }, -- This Works
  },
  opts = {
    model = "mistral", -- The default model to use.
    display_mode = "float", -- The display mode. Can be "float" or "split".
    show_prompt = false, -- Shows the Prompt submitted to Ollama.
    show_model = false, -- Displays which model you are using at the beginning of your chat session.
    no_auto_close = false, -- Never closes the window automatically.
    init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
    -- Function to initialize Ollama
    command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
    -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
    -- This can also be a lua function returning a command string, with options as the input parameter.
    -- The executed command must return a JSON object with { response, context }
    -- (context property is optional).
    -- list_models = "<function>", -- Retrieves a list of model names
    -- debug = false, -- Prints errors and the command which is run.
    debug = true, -- Prints errors and the command which is run.
  },
}
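As the comments in the config note, `command` can also be a Lua function that returns the command string. A minimal sketch, assuming the default Ollama endpoint (the `host` variable is illustrative, not a gen.nvim option):

```lua
-- Hypothetical: build the command string at runtime instead of using
-- a static string. gen.nvim passes its options table to this function.
command = function(options)
  local host = "http://localhost:11434" -- assumed default Ollama address
  return "curl --silent --no-buffer -X POST " .. host .. "/api/generate -d $body"
end,
```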
David-Kunz commented 7 months ago

Hi @Aldans ,

Thank you for these kind words and your issue report.

This should be fixed with https://github.com/David-Kunz/gen.nvim/commit/1319b03357fd7017bbaf1d45cd6b72bd9e106226

Please let me know if it works for you!

Best regards, David

Aldans commented 7 months ago

Looks like it works well now.

Thanks @David-Kunz