olimorris / codecompanion.nvim

✨ AI-powered coding, seamlessly in Neovim. Supports Anthropic, Copilot, Gemini, Ollama, OpenAI and xAI LLMs

[Bug]: Copilot adapter parameters overwritten when stream is true #363

Open lucobellic opened 2 hours ago

lucobellic commented 2 hours ago

Your minimal.lua config

Same as default

---@diagnostic disable: missing-fields

--NOTE: Set config path to enable the copilot adapter to work.
--It will search the following paths for the Copilot token:
--  - "$CODECOMPANION_TOKEN_PATH/github-copilot/hosts.json"
--  - "$CODECOMPANION_TOKEN_PATH/github-copilot/apps.json"
vim.env["CODECOMPANION_TOKEN_PATH"] = vim.fn.expand("~/.config")

vim.env.LAZY_STDPATH = ".repro"
load(vim.fn.system("curl -s https://raw.githubusercontent.com/folke/lazy.nvim/main/bootstrap.lua"))()

-- Your CodeCompanion setup
local plugins = {
  {
    "olimorris/codecompanion.nvim",
    dependencies = {
      { "nvim-treesitter/nvim-treesitter", build = ":TSUpdate" },
      { "nvim-lua/plenary.nvim" },
      { "hrsh7th/nvim-cmp" },
      { "stevearc/dressing.nvim", opts = {} },
      { "nvim-telescope/telescope.nvim" },
    },
    opts = {
      --Refer to: https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/config.lua
      strategies = {
        --NOTE: Change the adapter as required
        chat = { adapter = "copilot" },
        inline = { adapter = "copilot" },
      },
      opts = {
        log_level = "DEBUG",
      },
    },
  },
}

require("lazy.minit").repro({ spec = plugins })

-- Setup Tree-sitter
local ts_status, treesitter = pcall(require, "nvim-treesitter.configs")
if ts_status then
  treesitter.setup({
    ensure_installed = { "lua", "markdown", "markdown_inline", "yaml" },
    highlight = { enable = true },
  })
end

-- Setup completion
local cmp_status, cmp = pcall(require, "cmp")
if cmp_status then
  cmp.setup({
    mapping = cmp.mapping.preset.insert({
      ["<C-b>"] = cmp.mapping.scroll_docs(-4),
      ["<C-f>"] = cmp.mapping.scroll_docs(4),
      ["<C-Space>"] = cmp.mapping.complete(),
      ["<C-e>"] = cmp.mapping.abort(),
      ["<CR>"] = cmp.mapping.confirm({ select = true }),
      -- Accept currently selected item. Set `select` to `false` to only confirm explicitly selected items.
    }),
  })
end

Error messages

N/A

Log output

[TRACE] 2024-10-24 22:26:54
Chat created with ID 8973059
[TRACE] 2024-10-24 22:26:54
Chat opened with ID 8973059
[TRACE] 2024-10-24 22:26:54
Setting the header for the chat buffer
[TRACE] 2024-10-24 22:26:54
Rendering headers in the chat buffer
[DEBUG] 2024-10-24 22:26:56
Settings:
{
  env = {
    api_key = <function 1>
  },
  features = {
    text = true,
    tokens = false,
    vision = false
  },
  handlers = {
    chat_output = <function 2>,
    form_messages = <function 3>,
    form_parameters = <function 4>,
    inline_output = <function 5>,
    on_exit = <function 6>,
    setup = <function 7>
  },
  headers = {
    Authorization = "Bearer ${api_key}",
    ["Content-Type"] = "application/json",
    ["Copilot-Integration-Id"] = "vscode-chat",
    ["editor-version"] = "Neovim/0.11.0"
  },
  name = "copilot",
  opts = {
    stream = true
  },
  parameters = {
    max_tokens = 4096,
    model = "gpt-4o-2024-05-13",
    n = 1,
    temperature = 0,
    top_p = 1
  },
  raw = { "--no-buffer", "--silent" },
  roles = {
    llm = "assistant",
    user = "user"
  },
  schema = {
    max_tokens = {
      default = 4096,
      desc = "The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.",
      mapping = "parameters",
      order = 3,
      type = "integer"
    },
    model = {
      choices = { "gpt-4o-2024-05-13" },
      default = "gpt-4o-2024-05-13",
      desc = "ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.",
      mapping = "parameters",
      order = 1,
      type = "enum"
    },
    n = {
      default = 1,
      desc = "How many chat completions to generate for each prompt.",
      mapping = "parameters",
      order = 5,
      type = "number"
    },
    temperature = {
      default = 0,
      desc = "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.",
      mapping = "parameters",
      order = 2,
      type = "number"
    },
    top_p = {
      default = 1,
      desc = "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
      mapping = "parameters",
      order = 4,
      type = "number"
    }
  },
  url = "https://api.githubcopilot.com/chat/completions",
  <metatable> = {
    __index = {
      extend = <function 8>,
      get_default_settings = <function 9>,
      get_env_vars = <function 10>,
      map_roles = <function 11>,
      map_schema_to_params = <function 12>,
      new = <function 13>,
      resolve = <function 14>,
      set_env_vars = <function 15>
    }
  }
}
[DEBUG] 2024-10-24 22:26:56
Messages:
{ {
    content = "You are an AI programming assistant named \"CodeCompanion\".\nYou are currently plugged in to the Neovim text editor on a user's machine.\n\nYour core tasks include:\n- Answering general programming questions.\n- Explaining how the code in a Neovim buffer works.\n- Reviewing the selected code in a Neovim buffer.\n- Generating unit tests for the selected code.\n- Proposing fixes for problems in the selected code.\n- Scaffolding code for a new workspace.\n- Finding relevant code to the user's query.\n- Proposing fixes for test failures.\n- Answering questions about Neovim.\n- Running tools.\n\nYou must:\n- Follow the user's requirements carefully and to the letter.\n- Keep your answers short and impersonal, especially if the user responds with context outside of your tasks.\n- Minimize other prose.\n- Use Markdown formatting in your answers.\n- Include the programming language name at the start of the Markdown code blocks.\n- Avoid line numbers in code blocks.\n- Avoid wrapping the whole response in triple backticks.\n- Only return code that's relevant to the task at hand. You may not need to return all of the code that the user has shared.\n\nWhen given a task:\n1. Think step-by-step and describe your plan for what to build in pseudocode, written out in great detail, unless asked not to do so.\n2. Output the code in a single code block, being careful to only return relevant code.\n3. You should always generate short suggestions for the next user turns that are relevant to the conversation.\n4. You can only give one reply for each conversation turn.",
    id = 232375990,
    opts = {
      visible = false
    },
    role = "system"
  }, {
    content = "Say Hello",
    id = -1770108773,
    opts = {
      visible = true
    },
    role = "user"
  } }
[INFO] 2024-10-24 22:26:56
Chat request started
[DEBUG] 2024-10-24 22:26:56
Authorizing GitHub Copilot token
[DEBUG] 2024-10-24 22:26:57
Reusing GitHub Copilot token
[DEBUG] 2024-10-24 22:26:57
Request:
{ "-sSL", "-D", "/run/user/1000/plenary_curl_b48a4193.headers", "--compressed", "-X", "POST", "-H", "Authorization: Bearer tid=...;exp=1729803416;sku=monthly_subscriber;st=dotcom;chat=1;cit=1;malfil=1;8kp=1;ip=...;asn=...", "-H", "Content-Type: application/json", "-H", "Copilot-Integration-Id: vscode-chat", "-H", "Editor-Version: Neovim/0.11.0", "--data-raw", "{\"messages\":[{\"content\":\"You are an AI programming assistant named \\\"CodeCompanion\\\".\\nYou are currently plugged in to the Neovim text editor on a user's machine.\\n\\nYour core tasks include:\\n- Answering general programming questions.\\n- Explaining how the code in a Neovim buffer works.\\n- Reviewing the selected code in a Neovim buffer.\\n- Generating unit tests for the selected code.\\n- Proposing fixes for problems in the selected code.\\n- Scaffolding code for a new workspace.\\n- Finding relevant code to the user's query.\\n- Proposing fixes for test failures.\\n- Answering questions about Neovim.\\n- Running tools.\\n\\nYou must:\\n- Follow the user's requirements carefully and to the letter.\\n- Keep your answers short and impersonal, especially if the user responds with context outside of your tasks.\\n- Minimize other prose.\\n- Use Markdown formatting in your answers.\\n- Include the programming language name at the start of the Markdown code blocks.\\n- Avoid line numbers in code blocks.\\n- Avoid wrapping the whole response in triple backticks.\\n- Only return code that's relevant to the task at hand. You may not need to return all of the code that the user has shared.\\n\\nWhen given a task:\\n1. Think step-by-step and describe your plan for what to build in pseudocode, written out in great detail, unless asked not to do so.\\n2. Output the code in a single code block, being careful to only return relevant code.\\n3. You should always generate short suggestions for the next user turns that are relevant to the conversation.\\n4. You can only give one reply for each conversation turn.\",\"role\":\"system\"},{\"content\":\"Say Hello\",\"role\":\"user\"}],\"stream\":true}", "--no-buffer", "--silent", "https://api.githubcopilot.com/chat/completions" }
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[],"created":0,"id":"","prompt_filter_results":[{"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"prompt_index":0}]}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":"","role":"assistant"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Rendering headers in the chat buffer
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":"Hello"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":"!"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" How"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" can"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" I"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" assist"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" you"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":" today"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":"?"}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: {"choices":[{"finish_reason":"stop","index":0,"content_filter_offsets":{"check_offset":1643,"start_offset":1643,"end_offset":1677},"content_filter_results":{"error":{"code":"","message":""},"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"delta":{"content":null}}],"created":1729801617,"id":"chatcmpl-ALyi1YvcoXmv6JER4NJdac680zcup","usage":{"completion_tokens":9,"prompt_tokens":344,"total_tokens":353},"model":"gpt-3.5-turbo-0613"}
[TRACE] 2024-10-24 22:26:57
Output data:
data: [DONE]
[TRACE] 2024-10-24 22:26:57
Rendering headers in the chat buffer
[INFO] 2024-10-24 22:26:57
Chat request completed

Health check output

N/A

Describe the bug

When using the Copilot adapter with stream set to true (the default value), all adapter parameters are overwritten. Because the request then contains no model, Copilot always falls back to its default gpt-3.5-turbo-0613 model.
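For illustration, the curl request captured in the log above carries only the stream flag next to the messages; with the parameters intact (values from the settings dump above), the body would also include the model and sampling options:

    Body actually sent (bug):
        {"messages":[...],"stream":true}
    Body expected after the fix:
        {"messages":[...],"model":"gpt-4o-2024-05-13","max_tokens":4096,"n":1,"temperature":0,"top_p":1,"stream":true}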

This issue is caused by the setup function in copilot.lua:

    setup = function(self)
      if self.opts and self.opts.stream then
        self.parameters = {
          stream = true,
        }
      end
    ...

This can be fixed by using the same code as the OpenAI adapter:

    setup = function(self)
      if self.opts and self.opts.stream then
        self.parameters.stream = true
      end
    ...
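
The difference is easy to miss: the buggy version assigns a brand-new table to self.parameters, discarding everything map_schema_to_params had populated, while the fix only sets one field on the existing table. Below is a standalone sketch of the two behaviours (the adapter table here is hypothetical, not the plugin's actual code; the parameter values come from the settings dump above):

    -- Standalone Lua sketch: a hypothetical adapter table with the
    -- parameter values seen in the settings dump above.
    local function make_adapter()
      return {
        opts = { stream = true },
        parameters = {
          model = "gpt-4o-2024-05-13",
          max_tokens = 4096,
          n = 1,
          temperature = 0,
          top_p = 1,
        },
      }
    end

    -- Buggy behaviour: assigning a new table discards every
    -- schema-mapped parameter, including the model.
    local buggy = make_adapter()
    buggy.parameters = { stream = true }
    print(buggy.parameters.model) --> nil, so Copilot falls back to its default model

    -- Fixed behaviour: setting a single field keeps the rest intact.
    local fixed = make_adapter()
    fixed.parameters.stream = true
    print(fixed.parameters.model) --> gpt-4o-2024-05-13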

Reproduce the bug

No response

Final checks

olimorris commented 2 hours ago

Please can you share the minimal.lua so I can recreate this?

lucobellic commented 1 hour ago

Added minimal.lua, but it is the same as the default. I also added the output of a "Say Hello" request in the chat. The request does not contain any model/schema parameters, only stream=true.

olimorris commented 17 minutes ago

Oh my that is a huge spot! Thank you. Do you want to submit the PR to take the credit for this and I'll merge straight away

lucobellic commented 4 minutes ago

> Oh my that is a huge spot! Thank you. Do you want to submit the PR to take the credit for this and I'll merge straight away

Thank you, but you can commit the fix directly.