ms-jpq / coq_nvim

Fast as FUCK nvim completion. SQLite, concurrent scheduler, hundreds of hours of optimization.
GNU General Public License v3.0

Clarification: Are Tag completions based on tag files or dynamically generated from open buffers? #558

Open wspurgin opened 1 year ago

wspurgin commented 1 year ago

Based on #477 and other issues related to tag-based completions, I was under the impression that Tag-based completion leveraged existing tag conventions (like vim's tags option - see #386).

However, looking through tags.parse and the tag worker, I think I'm reading that tags are only generated from open buffers?

I think that's a very important distinction, and I definitely jumped to "it uses/parses tag files".

So I guess my questions are: 1) Did I read that all correctly? Are Tag-based completions dynamically generated into sqlite from open buffers only?

2) ... While it's certainly faster that way, I don't get to use any of the tags I've generated offline (I add tags from dependencies that way too). Is there a way to customize the ctags command coq runs (or at least add paths beyond open buffers)?

If the answer to (2) is "no": how do you feel about an API or extension that could append to the tags DB? My ctags get generated "offline" (using git hooks and other "triggers"); I think I could use those to append more tags into coq's tags sqlite DB and achieve the same effect, provided there was some way to do that.

In the meantime, I can still use the builtin tag support to just search the tags I've generated, so it's not a blocker, but it does make coq a little less immediately "pluggable" into my ecosystem without going fully to LSP.
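(To be concrete about the builtin route, and only as a stopgap, the stock insert-mode completion can also be pointed at the tags files; nothing here is coq-specific:)

-- Stopgap using plain Neovim options, not coq_nvim:
-- let native insert-mode completion (<C-n>/<C-p>) also scan the 'tags' files.
vim.opt.complete:append("t")
-- Dedicated tag completion is available with <C-x><C-]> in insert mode.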

vvaltchev commented 1 year ago

Hello @ms-jpq, @wspurgin, I share the exact same use case as @wspurgin. I'd like to use coq_nvim for a large code base (hundreds of thousands of files), where an LSP (with a dedicated compile_commands.json per file) is out of the question. Therefore, I'm using universal-ctags, and that's good enough to search for definitions etc., but it doesn't come with auto-completion. Now, coq_nvim is amazing and fast because it uses an SQLite DB, but it seems to populate the DB dynamically every time a file is opened, and it's unreasonable for me to open hundreds of thousands of files just to fill the DB.

There seems to be no way to pre-populate the DB using a pre-generated tags file. I'm sure neovim finds the tags file, because :ts works perfectly. It would be amazing if coq_nvim could do the same. I'd be happy even if I had to do that manually offline with a script etc.; it doesn't need to be on-the-fly and fast. It would just be great to have auto-completion based on all the tags.
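(A rough, untested thought: since :ts already resolves the tags, a custom source could probably lean on Neovim's own lookup, e.g. vim.fn.taglist(), instead of parsing the file by hand. Just a sketch of the data that is already available:)

-- Sketch only: taglist() returns the same entries :ts sees, as a list of tables.
local tags = vim.fn.taglist("^Logger")  -- takes a regexp; "^Logger" is just an example
for _, t in ipairs(tags) do
  -- each entry has at least t.name, t.filename and t.cmd; t.kind when the tags file carries kinds
  print(t.name, t.kind or "?", t.filename)
end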

Thank you!

rajaravivarma-r commented 6 months ago

I share the confusion. I'm using ripper-tags to generate tags for my Rails project and there were no completions whatsoever. Thanks for digging in and clearing this up.

rajaravivarma-r commented 6 months ago

I put together a custom source script that reads from a tags file (which should be directly under vim's pwd) and shows completions under the source name TagsFile with kind Text.

It is at a rudimentary stage, so don't blame me if it breaks.

You can source it directly from neovim, load it from init.vim with source file_path.lua, or require('file_path') from your init.lua.

-- `COQsources` is a global registry of sources
COQsources = COQsources or {}

COQsources["dd0e3499-7dde-4781-ab52-a7e894da7f7e"] = {
  name = "TagsFile", -- this is displayed to the client
  fn = function(args, callback)
    local items = {}

    -- label      :: display label
    -- insertText :: string | null, default to `label` if null
    -- kind       :: int ∈ `vim.lsp.protocol.CompletionItemKind`
    -- detail     :: doc popup

    local file = io.open("tags", "r")

    -- Make sure the file exists; hand coq an empty result rather than never calling back
    if not file then
      print("The tags file does not exist.")
      callback { isIncomplete = false, items = {} }
      return
    end

    -- Skip the first two lines (assumes the tags file header is exactly two lines,
    -- e.g. the !_TAG_FILE_FORMAT / !_TAG_FILE_SORTED lines)
    file:read()
    file:read()

    -- Iterate over each line in the file
    for line in file:lines() do
      local tag_name, file_path, short_def, tag_type, parent_class

      -- Capture the tab-separated ctags fields:
      --   tagname <TAB> file <TAB> /^ search pattern $/;" <TAB> kind <TAB> class:Parent
      for first_part, path, definition, type_info, class_info in string.gmatch(line, "([^\t]+)\t([^\t]+)\t(.-);\"\t([^\t]+)\t?(.*)") do
          tag_name = first_part
          file_path = path
          short_def = definition:gsub("^/^%s*", ""):gsub("%$/$", "")
          tag_type = type_info
          parent_class = class_info:gsub("class:", "")
      end

      -- print(tag_name)         -- prints: <<
      -- print(file_path)    -- prints: app/migration/utils/logger.rb
      -- print(short_def)    -- prints: def <<(msg)
      -- print(tag_type)         -- prints: f
      -- print(parent_class)    -- prints: Utils.Logger

      -- Build the completion item for this tag; skip lines whose fields did not parse
      if tag_name then
        local detail = tag_name .. " [" .. tag_type .. "] " .. "\n" .. file_path .. "\n" .. short_def .. "\n" .. parent_class
        local item = {
          label = tag_name .. " [" .. tag_type .. "] " .. parent_class,
          insertText = tag_name, -- defaults to label if null
          kind = vim.lsp.protocol.CompletionItemKind.Text,
          detail = detail
        }
        table.insert(items, item)
      end
    end

    -- Close the file
    file:close()

    callback {
      isIncomplete = true,
      -- isIncomplete = true -> **no caching**
      -- You probably want caching
      items = items
    }
  end,
  resolve = function()
  end,
  exec = function(...)
  end
}
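
For example (the file name and location below are just one option), save the snippet as ~/.config/nvim/lua/tags_file_source.lua and load it from init.lua:

-- init.lua: loading the module runs the snippet and registers the "TagsFile" source
require("tags_file_source")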