David-Kunz / gen.nvim

Neovim plugin to generate text using LLMs with customizable prompts
The Unlicense

Blank output with the setup config in readme #61

Closed · guoliang closed this issue 3 months ago

guoliang commented 5 months ago

I have tried the following:

-- Custom Parameters (with defaults)
{
    "David-Kunz/gen.nvim",
    opts = {
        model = "mistral", -- The default model to use.
        display_mode = "float", -- The display mode. Can be "float" or "split".
        show_prompt = false, -- Shows the Prompt submitted to Ollama.
        show_model = false, -- Displays which model you are using at the beginning of your chat session.
        no_auto_close = false, -- Never closes the window automatically.
        init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
        -- Function to initialize Ollama
        command = "curl --silent --no-buffer -X POST http://localhost:11434/api/generate -d $body",
        -- The command for the Ollama service. You can use placeholders $prompt, $model and $body (shellescaped).
        -- This can also be a lua function returning a command string, with options as the input parameter.
        -- The executed command must return a JSON object with { response, context }
        -- (context property is optional).
        list_models = '<omitted lua function>', -- Retrieves a list of model names
        debug = false -- Prints errors and the command which is run.
    }
},

and

require('gen').setup({
  -- same as above
})

What seemed to work for me was the setup given here: https://github.com/David-Kunz/gen.nvim/issues/32#issuecomment-1817778455

However, the output is just one line, with no line breaks or word wrap.
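
When the output comes back blank, it can help to verify that Ollama itself answers the kind of request the default command sends. A minimal sketch in Neovim's Lua (assuming a local Ollama on the default port with the mistral model pulled):

-- Sanity check (not part of gen.nvim): send a request like the default
-- `command` does and print the raw response, to see whether Ollama replies.
local body = vim.json.encode({ model = "mistral", prompt = "Hello", stream = false })
local out = vim.fn.system({
  "curl", "--silent", "--no-buffer", "-X", "POST",
  "http://localhost:11434/api/generate",
  "-d", body,
})
print(out)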

David-Kunz commented 5 months ago

Hi @guoliang ,

Could you try to pinpoint which part of that configuration is causing the issue? Could you comment out all the lines and re-enable them one by one until you find the culprit?

Thanks and best regards, David
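
A minimal starting point for that kind of bisection could look like this (a sketch: only the model is set, everything else stays at its default):

-- Minimal config: start here, then re-add options from the full table one at a time.
require('gen').setup({
    model = "mistral",
})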

alekspickle commented 4 months ago

Fellow idiot here. I'm not much of a Lua master, so for those not using packer (or just not using lazy.nvim): only the contents of opts goes into the setup block. I experienced the same thing the issue starter did, then tried the above and it worked. It goes like this:

require('gen').setup({
  model = "mistral",
  -- ...remaining options from the table above...
  debug = false
})

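For completeness: with lazy.nvim the same effect can be written with an explicit config function (a sketch; when only opts is given, lazy.nvim calls require('gen').setup(opts) for you):

-- lazy.nvim plugin spec using an explicit config function instead of `opts`.
{
    "David-Kunz/gen.nvim",
    config = function()
        require("gen").setup({
            model = "mistral",
            display_mode = "float",
            debug = false,
        })
    end,
},
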
mcginley-s1 commented 4 months ago

@David-Kunz I narrowed this down to 4be150b7421bc0f24c541916502e382666d1d107. Using Ollama locally with the default setup, I checked out a0a8ef951feba78613c6bd5558cd95949c0df9f5 and everything works. When I check out 4be150b7421bc0f24c541916502e382666d1d107 (the merge of https://github.com/David-Kunz/gen.nvim/commit/4be150b7421bc0f24c541916502e382666d1d107), it stops working. I have a few other things to do, but worst case I can do more debugging on that particular PR tomorrow. Sorry about all the edits; I keep clicking the hash itself expecting it to copy. :S I'm a bad GitHub user, haha.
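
Until that is fixed, one way to stay on the last known-good state from this thread is to pin the plugin to that commit (a sketch, assuming lazy.nvim as the plugin manager):

-- Pin gen.nvim to the commit reported to work above; drop the pin once the regression is fixed.
{
    "David-Kunz/gen.nvim",
    commit = "a0a8ef951feba78613c6bd5558cd95949c0df9f5",
    opts = { model = "mistral" },
},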

ojdaugtelpa commented 4 months ago

    .. "/api/chat -d $body"
-- .. "/api/generate -d $body"

I changed the endpoint from generate to chat and it works for me.
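
Spelled out in the setup table, that change would look something like this (a sketch; host and port are the defaults used earlier in the thread):

require('gen').setup({
    model = "mistral",
    -- Same curl invocation as the default, but pointed at /api/chat
    -- instead of /api/generate.
    command = "curl --silent --no-buffer -X POST http://localhost:11434/api/chat -d $body",
})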

tjex commented 3 months ago

The repo setup has now changed and is functioning by default on my end, so I would say this issue can be closed, @guoliang.