gsuuon / model.nvim

Neovim plugin for interacting with LLMs and building editor-integrated prompts.

[Feature request] proxy support for curl? #31

Closed kohane27 closed 8 months ago

kohane27 commented 8 months ago

Hello there. Hope you're doing well. Thank you for creating this LLM plugin; it suits my workflow better than the alternatives, so thank you very much!

Description

I'm using proton-privoxy because I need to go through a proxy over a VPN to reach api.openai.com.

I'm writing to ask whether you'd be willing to add proxy support.

Expected behavior

Currently I can do the following:

curl --proxy http://127.0.0.1:8888 ifconfig.co/json | jq

Could you please add some kind of configuration option that lets users run curl through a proxy?

Thank you again!

gsuuon commented 8 months ago

Hi @kohane27! Yeah, I think this is straightforward enough. Is it just a matter of passing some extra arguments to curl?

kohane27 commented 8 months ago

Thank you for getting back to me. I appreciate it.

Yes!

# without proxy
curl ifconfig.co/json
# with proxy
curl --proxy http://127.0.0.1:8888 ifconfig.co/json
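
As a side note, curl also honors the standard proxy environment variables, so another option (just a sketch, untested with this plugin) is to set one from Neovim before any request is made; curl processes spawned by Neovim inherit its environment:

-- e.g. in init.lua; adjust the address to your own proxy
vim.env.https_proxy = "http://127.0.0.1:8888"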
kohane27 commented 8 months ago

Thank you so much for such a quick response and even quicker impl!

I was testing it with the following:

  {
    "gsuuon/llm.nvim",
    config = function()
      local openai = require("llm.providers.openai")
      require("llm").setup({
        gpt = vim.tbl_extend("force", openai.default_prompt, {
          options = {
            curl_args = { "--proxy", "http://127.0.0.1:8888" },
          },
        }),
      })
    end,
  },

Then I tested with :Llm gpt.

Even if I change the port number to 1888, I still get a response, which shouldn't happen. So it seems the above configuration has no effect. Is there anything wrong with my configuration? Thank you again!

kohane27 commented 8 months ago

I also tried the following but to no avail:

local openai = require("llm.providers.openai")
openai.initialize({
  model = "gpt-3.5-turbo-0301",
  max_tokens = 400,
  temperature = 0.2,
})
require("llm").setup({
  default_prompt = {
    provider = openai,
    options = {
      curl_args = { "--proxy", "http://127.0.0.1:8888" },
    },
  },
})
gsuuon commented 8 months ago

Huh, I can't repro this. With a deliberately bad flag, the error shows the extra curl_args do reach curl:

  curl_args = vim.tbl_extend('force', openai.default_prompt, {
    options = {
      curl_args = { '--this-should-break' }
    }
  }),

2023-11-01T12:29:47 stream error curl  ERROR curl: option --this-should-break: is unknown
curl: try 'curl --help' for more information

And with the proxy flag set (nothing listening on port 8888 locally):

  curl_args = vim.tbl_extend('force', openai.default_prompt, {
    options = {
      curl_args = { '--proxy', 'http://127.0.0.1:8888' }
    }
  }),

2023-11-01T12:31:00 stream error curl  ERROR curl: (7) Failed to connect to 127.0.0.1 port 8888 after 2046 ms: Couldn't connect to server

Can you double-check that the plugin updated successfully, and which commit you're on?

gsuuon commented 8 months ago

Oh, I just noticed: neither of those setups is correct (not your fault; the readme needs improving, see #28). Prompts need to go in the prompts field of setup:

{
  "gsuuon/llm.nvim",
  config = function()
    local openai = require("llm.providers.openai")
    require("llm").setup({
      prompts = {
        gpt = vim.tbl_extend("force", openai.default_prompt, {
          options = {
            curl_args = { "--proxy", "http://127.0.0.1:8888" },
          },
        }),
      }
    })
  end,
},

And prompts need to have a builder field; I think this should've errored, actually:

require("llm").setup({
  default_prompt = {
    provider = openai,
    options = {
      curl_args = { "--proxy", "http://127.0.0.1:8888" },
    },
    builder = function(input)
      ...
    end
  },
})
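
Side note on the idiom above: vim.tbl_extend("force", ...) does a shallow merge in which keys from later tables win, so the table literal layers options on top of openai.default_prompt without mutating it. Roughly:

-- "force": keys in later tables override earlier ones (shallow merge)
local merged = vim.tbl_extend("force", { a = 1, b = 2 }, { b = 3, c = 4 })
-- merged is { a = 1, b = 3, c = 4 }; note nested tables are replaced
-- wholesale, not merged key-by-key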
kohane27 commented 8 months ago

Thank you for getting back to me! I did try multiple ways to set it up and looked briefly into the source code to figure out the configuration before asking for help. Hopefully it wasn't too much annoyance and hand-holding.

Both of the configurations you provided work!

For the default_prompt, I have the following (if anyone else is interested):

require("llm").setup({
  default_prompt = {
    provider = openai,
    options = {
      curl_args = { "--proxy", "http://127.0.0.1:8888" },
    },
    builder = function(input)
      return {
        model = "gpt-3.5-turbo-0301",
        temperature = 0.3,
        max_tokens = 120,
        messages = {
          {
            role = "system",
            content = "You are a helpful assistant.",
          },
          {
            role = "user",
            content = input,
          },
        },
      }
    end,
  },
})
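
One way to sanity-check that traffic really goes through the proxy (mirroring the repro above) is to point curl_args at a port nothing is listening on and confirm the request fails with a connection error:

options = {
  -- hypothetical dead port: a "Failed to connect" error here proves
  -- requests really are routed through --proxy
  curl_args = { "--proxy", "http://127.0.0.1:1" },
},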

Thank you again and have a good day!