Kurama622 / llm.nvim

Free large language model (LLM) support for Neovim: provides commands to interact with LLMs (such as ChatGPT, ChatGLM, Kimi, and local LLMs). Supports GitHub Models.

Can I use a local LLM model with ollama? #5

Closed: R0boter closed 6 days ago

Kurama622 commented 6 days ago

It should be feasible, but I haven't tried it because my computer doesn't have a GPU.

Set LLM_KEY arbitrarily in the shell.
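For example (the value itself is a placeholder; per the comment above, a local server does not check it, so any non-empty string should do):

```shell
# LLM_KEY just needs to exist; "placeholder" is an arbitrary value.
export LLM_KEY=placeholder
echo "LLM_KEY is set to: $LLM_KEY"
```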

Try this:

{
    "Kurama622/llm.nvim",
    dependencies = { "nvim-lua/plenary.nvim", "MunifTanjim/nui.nvim" },
    cmd = { "LLMSessionToggle", "LLMSelectedTextHandler", "LLMAppHandler" },
    config = function()
        require("llm").setup({
            url = "http://localhost:xxxxx",
            model = "your_model_name",
            api_type = "openai",
            max_tokens = 4096,

            temperature = 0.3,
            top_p = 0.7,

            prompt = "You are a helpful Chinese assistant.",

            prefix = {
                user = { text = "😃 ", hl = "Title" },
                assistant = { text = "  ", hl = "Added" },
            },

            save_session = true,
            max_history = 15,

            -- stylua: ignore
            keys = {
                -- The keyboard mappings for the input window.
                ["Input:Submit"]      = { mode = "n", key = "<cr>" },
                ["Input:Cancel"]      = { mode = { "n", "i" }, key = "<C-c>" },
                ["Input:Resend"]      = { mode = { "n", "i" }, key = "<C-r>" },

                -- These only work when "save_session = true".
                ["Input:HistoryNext"] = { mode = { "n", "i" }, key = "<C-j>" },
                ["Input:HistoryPrev"] = { mode = { "n", "i" }, key = "<C-k>" },

                -- The keyboard mappings for the output window in "split" style.
                ["Output:Ask"]        = { mode = "n", key = "i" },
                ["Output:Cancel"]     = { mode = "n", key = "<C-c>" },
                ["Output:Resend"]     = { mode = "n", key = "<C-r>" },

                -- The keyboard mappings for the output and input windows in "float" style.
                ["Session:Toggle"]    = { mode = "n", key = "<leader>ac" },
                ["Session:Close"]     = { mode = "n", key = { "<esc>", "Q" } },
            },
        })
    end,
    keys = {
        { "<leader>ac", mode = "n", "<cmd>LLMSessionToggle<cr>" },
    },
}

If it doesn't work, please provide a working curl command, such as:

curl http://localhost/xxxx -N -X POST -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Translate the following text to Chinese, please only return the translation: Just for fun."}], "top_p": 0.7, "max_tokens": 4096, "model": "<your model name>", "temperature": 0.3}'

And I can add this feature.
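For reference, Ollama exposes an OpenAI-compatible endpoint at `/v1/chat/completions` on port 11434 by default, so the template above might translate to something like this sketch (the model name `llama3` is a placeholder; passing the payload via a file sidesteps shell-quoting problems):

```shell
# Build the request payload in a file so shell quoting stays simple.
# "llama3" is a placeholder; use any model you have pulled with `ollama pull`.
cat > /tmp/ollama_payload.json <<'EOF'
{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "Translate the following text to Chinese, please only return the translation: Just for fun."}
  ],
  "max_tokens": 4096,
  "temperature": 0.3,
  "top_p": 0.7
}
EOF

# Sanity-check that the payload is valid JSON before sending it.
python3 -m json.tool /tmp/ollama_payload.json > /dev/null && echo "payload OK"

# Requires a running `ollama serve`; -N disables buffering so streamed chunks show up immediately.
# curl http://localhost:11434/v1/chat/completions -N -X POST \
#      -H "Content-Type: application/json" -d @/tmp/ollama_payload.json
```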

R0boter commented 6 days ago

Thanks for your work. I'll try it later; if you don't have time, maybe I can try to add this feature.

R0boter commented 6 days ago

It doesn't work with a local LLM model, but I found a plugin that specializes in using local LLM models, so I will close this issue. Thanks again for your work.

Kurama622 commented 4 days ago

It doesn't work with a local LLM model, but I found a plugin that specializes in using local LLM models, so I will close this issue. Thanks again for your work.

ok, I will add the feature later

Kurama622 commented 3 days ago

Now, I have added the feature: https://github.com/Kurama622/llm.nvim?tab=readme-ov-file#local-llm

R0boter commented 3 days ago

Now, I have added the feature: https://github.com/Kurama622/llm.nvim?tab=readme-ov-file#local-llm

It works!!! Awesome, thanks!