natebosch / vim-lsc

A vim plugin for communicating with a language server
BSD 3-Clause "New" or "Revised" License

Is it possible to improve slight completion stutter? Performance vs native Neovim LSP. #291

Closed bluz71 closed 4 years ago

bluz71 commented 4 years ago

Hello again Nate,

I have been playing with a nightly of Neovim's inbuilt LSP augmented by the completion.nvim plugin. LSP is not in stable Neovim yet, only in the nightlies.

LSC is clearly more mature, easier to configure, and has far fewer floating-window issues.

One thing I do notice on my very slow Core-M powered MacBook 12" is that with Flutter / Dart, typing never stutters during Neovim LSP completion, whereas with LSC it can (very slightly).

For example, if I type Mate (which expands to Material in the Flutter API) at a deliberate, neither-fast-nor-slow pace, after the third character I get a slight pause in LSC (the trigger point for gathering candidates), whilst with Neovim's LSP + completion.nvim I do not get any pause (even as it gathers and displays candidates).

I know that LSC uses async APIs as best it can, but there is still a tiny pause as I type the fourth character of Mate that I don't get in Neovim's LSP + completion.nvim.

I know that you have been dipping your toes into Lua recently, and into performance work in general.

You may or may not see fit to explore this. I leave that to you. Don't hesitate to close if this is an area not worth exploring.

Here is my Neovim LSP setup if you are interested in exploring:

call plug#begin('~/.vim/plugged')
Plug 'neovim/nvim-lsp'
Plug 'haorenW1025/completion-nvim'
call plug#end()

nmap gd  <cmd>lua vim.lsp.buf.definition()<CR>
nmap K   <cmd>lua vim.lsp.buf.hover()<CR>
nmap gr  <cmd>lua vim.lsp.buf.references()<CR>
nmap gR  <cmd>lua vim.lsp.buf.rename()<CR>
nmap ga  <cmd>lua vim.lsp.buf.code_action()<CR>

lua << EOF
    local nvim_lsp = require'nvim_lsp'
    local completion = require'completion'

    -- attach completion-nvim whenever dartls attaches to a buffer
    nvim_lsp.dartls.setup{
        cmd = { "dart", "/usr/local/flutter/bin/cache/dart-sdk/bin/snapshots/analysis_server.dart.snapshot", "--lsp" },
        on_attach = completion.on_attach
    }

    -- disable publishDiagnostics handling (no-op callback)
    vim.lsp.callbacks["textDocument/publishDiagnostics"] = function() end
EOF

The faster the machine, the less LSC stutter I see. I pretty much get no pauses on my quad-core i5 Linux desktop.

bluz71 commented 4 years ago

Hello Nate,

I wonder if it is worth experimenting to compare the cost of completion-list processing vs the IPC cost.

If the slight stutter, on slow machines, is IPC (talking to the language server), then maybe async/debouncing is the item of interest.

However, if it is list processing/sorting, then maybe Lua-izing that may be worthwhile.

I suspect it is more the former, and maybe there isn't much that can be done. I may try to find out what haorenW1025/completion-nvim is doing to get a smoother experience - maybe a different debounce value?

If you can, can you tell me where to comment out the list processing in the LSC completion handling?

Again, on a modern fast machine LSC seems nice and smooth (aka my desktop Linux box). On low end hardware there is a slight, and I mean slight, stutter compared with the Neovim native + completion-nvim solution.

natebosch commented 4 years ago

There are a couple things that could contribute here.

When we request completions we always flush changes: https://github.com/natebosch/vim-lsc/blob/baa44d450043cc020e1d6f0609d0e081bbcc6f9e/autoload/lsc/complete.vim#L92

We don't have a debounce there; as soon as it wants to trigger a completion it flushes changes, going through the diff algorithm.

Once we have the suggestions from the server, there are 2 main pieces of work.

Translate the representation to something vim friendly: https://github.com/natebosch/vim-lsc/blob/baa44d450043cc020e1d6f0609d0e081bbcc6f9e/autoload/lsc/complete.vim#L232-L248

Filter and sort the completions

https://github.com/natebosch/vim-lsc/blob/baa44d450043cc020e1d6f0609d0e081bbcc6f9e/autoload/lsc/complete.vim#L205-L221

The "sort" in this case is just collecting them in buckets based on the type of match, but I do think this is the most likely cause of slowness.

natebosch commented 4 years ago

#294 drops FilterAndSort. I'm not sure that it makes things any better for me, at least in the case where the "filter" was doing a lot. The Dart analysis server sends a lot of completions that don't match by prefix, so without the filter I think it spends more time elsewhere.

It's possible that implementing parts of this in lua could help. That could also make it feasible to introduce fuzziness in the matching instead of strict prefix or substring matches...

bluz71 commented 4 years ago

Good details Nate.

I see you have a testing branch ready to go. I will report findings soon (once my MacBook finishes its slow update).

bluz71 commented 4 years ago

Hi Nate,

The disable-completion-prefix-check branch has the same stutter.

We can eliminate that as a cause.

bluz71 commented 4 years ago

Hi Nate,

I wonder, is the initial rendering of the completion menu a synchronous act?

I assume it is just the standard Vim completion popup menu, not a new style Vim popup window or Neovim floating window?

Playing with the Dart language server, it really does feel like the stutter occurs between the moment the completion menu starts rendering into existence and me still typing characters (at a certain pace). Once the completion popup is on-screen I don't get the stutters, no matter the IPC, the filtering, or my typing pace (so it seems).

Just a theory: maybe the act of displaying the menu on screen for the first time blocks the editor from asynchronously displaying ongoing keyboard input.

I switched over to Neovim LSP + completion-nvim and the completion popup never causes keyboard stutters and pauses. Maybe it is a floating window OR maybe it is asynchronously rendered and updated?

A useful way to test that theory would be a test branch where the completion menu is always rendered (even with a fake first entry), so that it is on the screen all the time. That should prove or disprove my current theory.

bluz71 commented 4 years ago

Hi again,

I just tested out vim-lsp + asyncomplete.vim and observed similar stutter behaviour; if anything, actually a little worse.

The source was a Flutter Dart file, with the Dart Language Server running on a very slow Core-M MacBook.

And, if I trust my eyes, I observe the same pause point, that being the moment when the completion renders into existence. Once the completion popup is on-screen, no pauses or stutters as best as I can tell.

I wonder if native LSP in Neovim + completion-nvim is rendering a floating window instead of a completion popup? Or maybe it is asynchronously being rendered? In LSC and vim-lsp it really does appear that rendering the completion popup blocks the main thread pausing keyboard input.

Again, I can't confirm the theory yet. More research needed; I could be wrong.

If we do figure out and solve this issue - big if - then we should also let the vim-lsp project know, since their stutter is even more egregious.

Yes, I am now greedy, I do want that silky smooth experience I get in Neovim's native LSP client in LSC itself.

I will test out the Completor plugin next.

prabirshrestha commented 4 years ago

Here are my investigations into completion perf. The issue is that it takes forever to loop over 2,000 items and then convert the LSP items to vim complete items. https://github.com/prabirshrestha/vim-lsp/issues/823

vim-lsp recalculates the same things for every item, which causes it to be slower than others.

bluz71 commented 4 years ago

@prabirshrestha,

Nice to chat, thanks for dropping by, I appreciate it.

Would that processing still happen once the completion menu is on-screen? My understanding is yes.

It seems to me that once the completion menu is actually on the screen, I don't get stutters as the Flutter API matches are being filtered depending on the prefix being entered.

With vim-lsp the menu can appear after the first character is typed, or the second, or the third, etc. I varied my typing rate to trigger the completion on the first, then second, then third character, and the pause is always the same: the point when the menu pops into existence; from then on, filtering does not seem to pause. So I am not sure that list processing is the cause of the stutter?

Again, I am using a ridiculously slow MacBook with the Dart Language Server, which is also a tad slow. So it is a good worst-case scenario.

Even with all that, I can use LSC with Dart on my machine just fine, and I have since I started using LSP. The stutter is about 300ms, if I were to guess. I only raised this issue because the native Neovim LSP completion was buttery smooth, and it would be nice if both LSC and vim-lsp were also as smooth.

P.S. Neovim's native LSP has other issues that make it not good to use right now. Very green, with weird floating-window issues.

natebosch commented 4 years ago

I wonder, is the initial rendering of the completion menu a synchronous act?

I assume it is just the standard Vim completion popup menu, not a new style Vim popup window or Neovim floating window?

Yeah, this is using :help complete() (http://vimdoc.sourceforge.net/htmldoc/eval.html#complete()) so it uses the normal vim completion popup menu. Trying to use a custom menu would likely push the complexity too high, since we'd need to implement all the other completion behavior too. I don't think this is likely the bottleneck.
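For anyone curious, the whole hand-off to the native menu is a single call - an illustrative sketch with made-up items, not vim-lsc's actual invocation:

" complete() may only be called from Insert mode (e.g. from a
" mapping or timer callback); the first argument is the column
" where the completed text starts - here assuming 3 typed chars.
call complete(col('.') - 3, [
    \ {'word': 'MaterialApp', 'menu': '[class]'},
    \ {'word': 'MaterialButton', 'menu': '[class]'},
    \ ])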

Once the completion popup is on-screen, no pauses or stutters as best as I can tell.

The problem is most likely the synchronous vimscript doing all the work it needs before it calls complete(), so that would make sense that it seems like the delay is right in the leadup to when that menu shows.

Would that processing still happen once the completion menu is on-screen? My understanding is yes.

Not in the case of vim-lsc. The complete() call only happens once, and from there it is all native vim behavior. (This won't hold true if I ever get around to trying out fuzzy completion like YCM - there we'd need to inject the plugin back into the process to allow more items than vim shows, because vim wants prefix matches, not fuzzy matches. I'm not exactly sure how that would work, but since YCM is doing it, it should be possible.)

As of today, once the menu is on screen the plugin isn't doing any work.

The issue is that it takes forever to loop 2000 items and then convert lsp items to vim complete items.

This does highlight something interesting - I'm not sure if vim-lsp does any filtering, but if it does you might be able to do a similar fix. Your comment did make me realize that vim-lsc is wasting a bunch of time in all the cases where filtering is happening. If it filtered first, it wouldn't need to spend as much time on the items that get thrown away anyway.
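In rough terms the reordering looks like this - a sketch reusing the s:CompletionItem() converter named above, not the actual diff:

" Old order: convert every raw server item, then filter.
" New order: filter the raw items first, so the conversion only
" runs on suggestions that can actually be shown.
function! s:FilterThenConvert(raw_items, typed) abort
  let l:kept = filter(copy(a:raw_items),
      \ {_, item -> stridx(item.label, a:typed) >= 0})
  return map(l:kept, {_, item -> s:CompletionItem(item)})
endfunction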

I've got an attempt at filtering before converting in https://github.com/natebosch/vim-lsc/pull/298

@bluz71 can you try it out and see if you notice any difference?

natebosch commented 4 years ago

@bluz71 a side note specific to Dart: vim-lsc-dart enables machine-learning completion suggestions by default, which could both make the analysis server a little slower and cause it to send more suggestions than it would otherwise. I recently added support for disabling this; if you are on the latest version of that plugin you can put let g:lsc_dart_enable_completion_ml = v:false in your vimrc and see if that also helps at all.

bluz71 commented 4 years ago

Hello Nate,

The problem is most likely the synchronous vimscript doing all the work it needs before it calls complete(), so that would make sense that it seems like the delay is right in the leadup to when that menu shows.

You and Prabir are probably right. It just really seems that once the popup is on-screen it is smooth sailing.

I need to test again with the javascript-typescript-langserver since that is a fast language server. Let's see if I can make it stutter.

Crudely, we could also just cut out that pre-processing, aka me commenting out a big block, and see how that goes.

I've got an attempt at filtering before converting in #298

@bluz71 can you try it out and see if you notice any difference?

Yes, I will test soon and let you know.

a side note specific to Dart, vim-lsc-dart enables machine learning completion suggestions by default which could both make the analysis server a little slower

I use my own configuration for Dart, Ruby and JavaScript as follows:

let g:lsc_server_commands = {
 \  'dart': {
 \    'command': 'dart $DART_SDK/bin/snapshots/analysis_server.dart.snapshot --lsp',
 \    'log_level': -1,
 \    'suppress_stderr': v:true,
 \  },
 \  'ruby': {
 \    'command': 'solargraph stdio',
 \    'log_level': -1,
 \    'suppress_stderr': v:true,
 \  },
 \  'javascript': {
 \    'command': 'typescript-language-server --stdio',
 \    'log_level': -1,
 \    'suppress_stderr': v:true,
 \  }
 \}

I experimented with machine-learning mode, and it was simply too slow and RAM-hungry on my little 1.1GHz Core-M MacBook. Normal LSP mode for the Dart Analysis Server already does a great job. I also doubt anyone is using Dart LSP on a slower machine :)

Again, I want to emphasise that LSC is more on the fast side than the slow side; there is no significant issue here, more a micro issue. If I hadn't compared it to an alternative LSP implementation, I would have just lived with it; the evils of comparison. Also, your recent Lua optimization had me in a performance-thinking mode - yes, I experimented with Neovim native because of your Lua work (since I was intrigued).

P.S. Your simple configuration for LSC is bang on the money. Neovim's native LSP implementation throws you into the Lua config deep end; I ended up drowning :)

bluz71 commented 4 years ago

Ok Nate,

A couple non-scientific data points.

On a fast machine master was already mostly good (my quad core i5 Linux machine). On the same fast machine filter-suggestions-first is smooth as silk with Dart LSP.

I think you fixed the biggest part of the stutter. It's up to you to see whether it is worth extracting, if possible, that last drop of performance.

The other way to tackle the issue is to make Dart Analysis Server a bit less chatty?

bluz71 commented 4 years ago

I did a quick hack of the CompletionItems function in autoload/lsc/complete.vim; I wanted to see how many completion_items we are dealing with when using the typescript-language-server and the Dart Analysis Server. Basically I added an echo of len(completion_items) after the map (on the master branch).
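Something like this, right after the map() call:

" crude instrumentation - show how many suggestions arrived
echomsg 'completion_items: ' . len(completion_items)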

The candidate insert text was ret, wanting to complete to just return.

Even the typescript-language-server ends up sending a pretty big list, but the Dart list is 5 times larger.

A list of nearly 6,000 items sounds like a non-optimal recipe for pure Vimscript.

I don't know if this is useful data, heck you probably already knew this.

natebosch commented 4 years ago

I think you fixed the biggest part stutter. It's up to you to see if it is worth extracting, if possible, that last drop of performance.

Awesome! I'll try to get that PR in soon. I'm not sure if I want to wait for adding tests.

I think we're probably at the point of diminishing returns to optimize further in the client unless/until I have a chance to dig in on fuzzy completion so I'll close this issue when I submit that PR.

The Dart server is pretty aggressive about suggestions. There is some discussion at https://github.com/dart-lang/sdk/issues/40026 but no concrete plans to send fewer items that I'm aware of. One thing that could help, though: the Dart server sends far fewer suggestions following a . since it limits them to the valid things for that type, while a completion for a word on its own has many, many more valid options, and it doesn't filter by the characters typed so far (which is why we filter on the client side). We could disable the auto complete for words on their own - you'd still be able to trigger completion manually. I filed https://github.com/natebosch/vim-lsc/issues/299 to discuss and track. Let me know over on that issue if you think it would be helpful for you.

bluz71 commented 4 years ago

Hi Nate,

Best regards.

bluz71 commented 4 years ago

So in summary, the performance comparison when completing with LSC vs Neovim LSP augmented with completion.nvim basically boils down to the list-processing speed differential between VimScript and LuaJIT.

With language servers that send small lists (say, well under 2,000 items) there is no tangible difference. Getting up towards 6,000 items, things start to get dicey with VimScript on slower machines.

My comment here is probably also applicable, that being that LuaJIT is really damn fast.

bluz71 commented 4 years ago

Oh dear, I just upgraded Flutter from v1.12.13 to v1.17.3 and now completions are very stuttery/laggy on my fast i5 desktop.

This can't be right? The latest Dart Language Server is now sending 5 times as many items as before, which itself was probably 5 times too many to begin with.

In this LSC code:

function! s:CompletionItems(completion_result) abort
  let completion_items = []
  " the result may be a bare CompletionItem[] list or a
  " CompletionList dict with an 'items' key
  if type(a:completion_result) == type([])
    let completion_items = a:completion_result
  elseif type(a:completion_result) == type({})
    let completion_items = a:completion_result.items
  endif
  " convert every LSP item to a vim-friendly dict, in place
  call map(completion_items, {_, item -> s:CompletionItem(item)})
  let completion = {'items' : completion_items}
  for item in completion_items
    if has_key(item, 'start_col')
      let completion.start_col = item.start_col
      break
    endif
  endfor
  return completion
endfunction

I assume completion items are candidate text strings that the Language Server thinks may be relevant?

So is the expectation here that LSP clients must always do the filtering - does more text from the server imply more filtering needed in the client?

If so, with the latest Dart language server we are now dealing with 26,000 completion items, which is bringing Vim to its knees even on a quad-core desktop.
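(From skimming the LSP spec: each item is not a bare string but a structured CompletionItem, roughly this shape once json_decode() has run - field names per the spec, values invented.)

" 'label' is the only required field; 'kind' 7 means Class.
let s:example_item = {
    \ 'label': 'MaterialApp',
    \ 'kind': 7,
    \ 'detail': 'MaterialApp({Key key, ...})',
    \ 'documentation': {'kind': 'markdown', 'value': '...'},
    \ }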

prabirshrestha commented 4 years ago

I improved perf a bit for vim-lsp. You need https://github.com/prabirshrestha/vim-lsp/pull/835 and the latest asyncomplete-lsp.vim.

For vim-lsp I would like to avoid filtering, but others can easily create plugins that filter certain things out, for example: https://github.com/machakann/asyncomplete-ezfilter.vim and https://github.com/tsufeki/asyncomplete-fuzzy-match

I'm not familiar with dart, but one thing I saw in the typescript lang server was that it will show all possible global vars (such as window/console) as well as completions for symbols that don't exist in scope yet but get auto-imported when you accept them. So one way to reduce the list might be to disable code actions and see if that helps?

One thing I had thought of to improve this: if the completion count is greater than X, only convert partial results, but also set incomplete = true so the next char keystroke will request completion results again.
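As a sketch, with an arbitrary cap - an idea, not shipped code:

" Convert at most a fixed number of suggestions; remembering that
" the list was truncated lets the next keystroke query the server again.
let s:max_items = 1500
function! s:CapResults(raw_items) abort
  return {
      \ 'items': a:raw_items[0 : s:max_items - 1],
      \ 'incomplete': len(a:raw_items) > s:max_items,
      \ }
endfunction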

What we really need is wasm in vim so we can use threads :) https://twitter.com/PrabirShrestha/status/1239023903235686401 and https://github.com/prabirshrestha/vim/tree/wasm3

It will be interesting to see how lua performs here. But it does require converting quite a large chunk of vimscript to lua, at least in vim-lsp. https://github.com/prabirshrestha/vim-lsp/blob/052d26fa1846300339313114060dcc710c8f4670/autoload/lsp/omni.vim#L258-L332. This would mean double maintenance :(

bluz71 commented 4 years ago

@prabirshrestha,

I will give your latest versions a test on my slow laptop (which still uses the older, less verbose Dart Language Server) this evening and report the results in this issue. Stay tuned.

tsserver appears to be sending far fewer candidates than Dart: 1,200 vs 5,600/26,000. JavaScript completion for my simple React projects is excellent. I suspect the TypeScript language server is one of the most performant available.

One thing I had thought of to improve was that if the completion count is greater than X, only convert partial results but also set incomplete = true so next char key will call completion results again.

Does that exchange push filtering server-side? Say I type ret (short for return); with the above logic, does the LSP client send ret to the server, which then filters back a smaller set of relevant matches? The smaller the list the language server sends, the better for VimScript list processing.

It definitely feels like more than 1,500 candidates starts to cause issues. On my slow laptop 5,600 is pretty bad (but not on my desktop). 26,000-plus candidates is barely usable on my desktop, with about a 2 second lag; on my laptop, if I upgraded Flutter, who knows - maybe even a 10 second lag as I type letters (just a guess, since I won't upgrade Flutter).

It will be interesting to see how lua performs here. But it does require converting quite a large chunk of vimscript to lua

I don't know if Nate is planning to do fuzzy matching in Lua or VimScript. I assumed Lua, but your post makes me nervous about the doubled-up code path and maintenance burden. Then again, LuaJIT in Neovim is up to 2,000 times faster than VimScript. If LSC stays pure VimScript, then list processing needs to be bounded to minimize latency.

It will be an interesting conversation.

What I do know is that Dart LSP is pushing pure VimScript LSP clients to the extreme.

bluz71 commented 4 years ago

@prabirshrestha,

I improved perf a bit for vim-lsp. You need prabirshrestha/vim-lsp#835 and the latest asyncomplete-lsp.vim.

I don't have benchmarking tools, so all I can judge is on feel and visuals.

The perf-get-omnni-completion-items branch seems a bit better than master. Using numbers out of thin air: master feels like it has an 800ms stutter, and perf-get-omnni-completion-items feels like it has a 600ms stutter.

LSC, with and without Nate's latest enhancement, feels snappier. No doubt you are doing more work, especially on the diagnostics side; I turn all LSC diagnostics off and use ALE instead.

The Dart Language Server is sending a lot of data over the wire. My gut instinct is that it is sending way too much. I think this suggestion of yours is worth contemplating:

One thing I had thought of to improve this: if the completion count is greater than X, only convert partial results, but also set incomplete = true so the next char keystroke will request completion results again.

Best regards.

prabirshrestha commented 4 years ago

Vim includes a profiler:

:profile start profile.log
:profile func *
" ... reproduce the completion stutter, then quit to flush the log
:quit

Do you have a sample dart repo I can try? I haven't used dart before.

bluz71 commented 4 years ago

@prabirshrestha,

Profile 1, master branch:

FUNCTIONS SORTED ON TOTAL TIME
count  total (s)   self (s)  function
  128   3.049541   0.006727  <SNR>162_on_stdout()
  128   3.042815   0.310671  <SNR>161_on_stdout()
    2   2.703529   0.000184  <SNR>84_handle_completion()
    2   2.702487   0.230760  lsp#omni#get_vim_completion_items()
11249   2.471727   0.210586  lsp#omni#get_vim_completion_item()
11249   2.261141   1.841535  lsp#omni#default_get_vim_completion_item()
   15   1.272256   0.007046  <SNR>145_recompute_pum()
   15   1.258250   1.244765  <SNR>145_default_preprocessor()
   94   0.731074   0.000691  <SNR>160_next()
   73   0.520560   0.000179  <SNR>160_callback()
11249   0.371560   0.325156  <SNR>163_create_user_data()
   21   0.211942   0.000130  lsp#utils#step#start()
   13   0.146393   0.001032  <SNR>152_ensure_flush()
   13   0.143449   0.001016  <SNR>152_ensure_start()
   13   0.136070   0.000433  <SNR>152_ensure_init()
   13   0.132453   0.000538  <SNR>152_ensure_conf()
   13   0.128766   0.001010  <SNR>152_ensure_open()
   13   0.101145   0.015624  <SNR>152_ensure_changed()
    2   0.080890   0.000258  AutoSave()
    1   0.080481   0.004167  DoSave()

FUNCTIONS SORTED ON SELF TIME
count  total (s)   self (s)  function
11249   2.261141   1.841535  lsp#omni#default_get_vim_completion_item()
   15   1.258250   1.244765  <SNR>145_default_preprocessor()
11249   0.371560   0.325156  <SNR>163_create_user_data()
  128   3.042815   0.310671  <SNR>161_on_stdout()
    2   2.702487   0.230760  lsp#omni#get_vim_completion_items()
11249   2.471727   0.210586  lsp#omni#get_vim_completion_item()
   41   0.055542   0.053719  <SNR>158_encode_uri()
    1   0.048732   0.048717  <SNR>145_on_insert_leave()
11249              0.046404  <SNR>163_create_user_data_key()
11249              0.046307  lsp#utils#_trim()
   15   0.061809   0.024264  <SNR>145_on_change()
    6   0.023068   0.022320  gitgutter#async#execute()
   13   0.101145   0.015624  <SNR>152_ensure_changed()
  175   0.018454   0.014806  MoonflyActiveStatusLine()
  175              0.013290  ObsessionStatus()
   15   0.013484   0.010986  asyncomplete#preprocess_complete()
   39              0.009979  asyncomplete#context()
   30              0.008930  <SNR>87_abs_path()
  175   0.014743   0.008897  <SNR>166_GetCounts()
   15   1.272256   0.007046  <SNR>145_recompute_pum()

Profile 2, perf-get-omnni-completion-items branch:

FUNCTIONS SORTED ON TOTAL TIME
count  total (s)   self (s)  function
  139   2.802449   0.008733  <SNR>162_on_stdout()
  139   2.793716   0.302474  <SNR>161_on_stdout()
    2   2.457827   0.000170  <SNR>84_handle_completion()
    2   2.456899   0.237656  lsp#omni#get_vim_completion_items()
11249   2.219201   1.810996  <SNR>163_get_vim_completion_item()
   14   1.172933   0.007402  <SNR>145_recompute_pum()
   14   1.158770   1.146942  <SNR>145_default_preprocessor()
   94   0.685620   0.000715  <SNR>160_next()
   73   0.488437   0.000192  <SNR>160_callback()
11249   0.361058   0.316452  <SNR>163_create_user_data()
   21   0.198623   0.000108  lsp#utils#step#start()
   13   0.137317   0.000626  <SNR>152_ensure_flush()
   13   0.134760   0.000899  <SNR>152_ensure_start()
   13   0.126804   0.000613  <SNR>152_ensure_init()
   13   0.123144   0.000489  <SNR>152_ensure_conf()
   13   0.119776   0.000692  <SNR>152_ensure_open()
   13   0.096777   0.014412  <SNR>152_ensure_changed()
    2   0.078387   0.000278  AutoSave()
    1   0.077961   0.004329  DoSave()
   14   0.064039   0.000874  <SNR>146_maybe_notify_on_change()

FUNCTIONS SORTED ON SELF TIME
count  total (s)   self (s)  function
11249   2.219201   1.810996  <SNR>163_get_vim_completion_item()
   14   1.158770   1.146942  <SNR>145_default_preprocessor()
11249   0.361058   0.316452  <SNR>163_create_user_data()
  139   2.793716   0.302474  <SNR>161_on_stdout()
    2   2.456899   0.237656  lsp#omni#get_vim_completion_items()
   41   0.054174   0.052785  <SNR>158_encode_uri()
11249              0.045230  lsp#utils#_trim()
11249              0.044605  <SNR>163_create_user_data_key()
    1   0.042033   0.042023  <SNR>145_on_insert_leave()
   14   0.063165   0.025992  <SNR>145_on_change()
    7   0.021637   0.021055  gitgutter#async#execute()
  201   0.019751   0.016166  MoonflyActiveStatusLine()
   13   0.096777   0.014412  <SNR>152_ensure_changed()
  201              0.013861  ObsessionStatus()
   35              0.010392  <SNR>87_abs_path()
  201   0.016213   0.010135  <SNR>166_GetCounts()
   14   0.011828   0.009535  asyncomplete#preprocess_complete()
   36              0.009206  asyncomplete#context()
  139   2.802449   0.008733  <SNR>162_on_stdout()
   14   1.172933   0.007402  <SNR>145_recompute_pum()

Do you have a sample dart repo I can try? I haven't used dart before.

I recommend these steps:

au User lsp_setup call lsp#register_server({
        \ 'name': 'dartls',
        \ 'cmd': {server_info->['dart', '/usr/local/flutter/bin/cache/dart-sdk/bin/snapshots/analysis_server.dart.snapshot', '--lsp']},
        \ 'whitelist': ['dart'],
        \ })

One quirk is that the newest Dart Analysis Server appears to be a lot more verbose than the old one (aka slower by a lot). I am using the older one on my laptop.

Fundamentally, as you said a while ago, this is a list-processing issue AND possibly a too-verbose Dart LSP issue.

It is probably why the YCM crowd use Python and the Neovim guys use LuaJIT.

bluz71 commented 4 years ago

@natebosch,

I am back to looking at Flutter completion slowness that I noted yesterday, Flutter 1.12 vs 1.17.

I decided to add some debugging to autoload/lsc/complete.vim (the redir captures):

function! s:CompletionItems(completion_result) abort
  let completion_items = []
  if type(a:completion_result) == type([])
    let completion_items = a:completion_result
  elseif type(a:completion_result) == type({})
    let completion_items = a:completion_result.items
  endif
  call map(completion_items, {_, item -> s:CompletionItem(item)})
  " debugging: capture the whole converted list into register a
  redir @a
  echo completion_items
  redir END
  let completion = {'items' : completion_items}
  " debugging: show how many items the server sent
  echo len(completion_items)
  for item in completion_items
    if has_key(item, 'start_col')
      let completion.start_col = item.start_col
      break
    endif
  endfor
  return completion
endfunction

completion_items looks to be a list; it contains about 5,630 elements with Flutter 1.12 and 26,789 elements with Flutter 1.17.

Here are the completion_items for each whilst I was completing the term ret:

lsc_flutter_1_12.log

lsc_flutter_1_17.log

Interesting.

Do I interpret this correctly: each word is an element in the completion_items list? I don't understand how len(completion_items) gives 26,789. Is len not length, but actually memory size?

For comparison here is the completion_items when completing JavaScript (in a React project):

lsc_javascript.log

And lastly Ruby in a Rails project:

lsc_ruby.log

The JavaScript and Ruby logs kind of make sense to me. I don't know what is happening in Dart land; maybe my crude debugging is in the wrong spot and is not actually capturing what I think it is.

But it is worth looking into, since it does appear to be exhibiting poor performance in LSC. Maybe that poor performance is elsewhere - JSON decoding, perhaps?

Best regards.

natebosch commented 4 years ago

Do I interpret this correctly, each word is a element in the completion_items list? I don't understand how len(completion_items) gives 26,789? Is len not length but is actually memory size?

Yes, len(completion_items) should give the number of suggestions that the server is sending.
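For instance:

echo len(['return', 'retry', 'retain'])
" prints 3 - the element count, not a byte size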

It is indeed very confusing to see so many items coming from the server. It looks like those .log files are not complete (wow it looks like flutter has some long docs on some of those items though!).

The instrumentation log files from the linked issue in the SDK do look like they should have enough info to debug this - let's follow up over on that issue.

bluz71 commented 4 years ago

It looks like those .log files are not complete

Yes, there does appear to be a massive truncation going on; it is hiding something.

I will use a for loop instead and log all the completions that way. I want to see each completion.

My gut instinct: something seems wrong. 27,000 completions feels completely wrong. I wonder if there are many duplicates, or whether LSC is inadvertently double or triple appending?

I will keep the Vim and LSC discussion here.

If we ascertain that the Analysis Server is amiss, then focus should shift to the linked Dart SDK issue. Note that I added some logging over there that was requested.

Stay tuned I will update here soon.

natebosch commented 4 years ago

See my latest comment there: https://github.com/dart-lang/sdk/issues/40026#issuecomment-641738403

14,801 of those completions appear to be duplicates.

bluz71 commented 4 years ago

Ok here is the full completion_items list (no truncation):

lsc_flutter_1_17-2.log

I need to study this list. Hmmm, maybe Dart Analysis Server is legit sending this; it could be the entire surface for every type in Flutter.

But no doubt it is beyond VimScript's ability to handle efficiently; iterating 27,000 items will bring auto-completion to its knees.

bluz71 commented 4 years ago

We are now deep down the rabbit hole. I did not expect to end up here when I opened this. But I have a feeling good will come of this.

Nate, have you given any thought to @prabirshrestha's suggestion of a hard cap - pulling a number out of thin air, say 1,500? His idea:

One thing I had thought of to improve this: if the completion count is greater than X, only convert partial results, but also set incomplete = true so the next char keystroke will request completion results again.

We may be able to tame Dart's Language Server (maybe, maybe not); but it is the wild west out there - who knows what some other language servers do.

What we do know is that too many completion items will cause stutter and in the worst case scenario outright pauses.

Ideally I want LSC never to pause, heck I don't even want stutters if possible.

Best regards.

bluz71 commented 4 years ago

@prabirshrestha and @natebosch ,

Do you know if an LSP client can feed prefix suggestions to a language server for it to filter server side?

In LSC's case, auto-completion for terms happens after a user has typed a third character; hence it seems baffling that 27,000 completion suggestions are sent over the wire when I only care about choices that start with the 3-character prefix I have already typed (e.g. Mat wanting to expand to MaterialApp).

And these 27,000 suggestions are sent every time I type a 3-character term, which is often - very often.

Not having read the Language Server spec, I am in the dark.

bluz71 commented 4 years ago

Do you know if an LSP client can feed prefix suggestions to a language server for it to filter server side?

Well it seems the Dart SDK team will indeed be implementing server-side prefix filtering as noted here.

This will be of great benefit. I strongly suspect it will entirely eliminate the stutter I note in this issue (along with Nate's recent optimisation).

It would be nice if LSC and vim-lsp both supported this server-side filtering.

Hopefully Dart's implementation adheres to the existing LSP spec, and other language servers may already, or in future, support that same mode of operation.

More detail needed from the Dart team.

Lastly, this will be of no benefit for autocompletions that start after a space. I personally see no value, and plenty of negatives, for completions after a space (aka no prefix). LSC defaults to autocompleting after three characters; now more than ever that is a very sensible default.

bluz71 commented 4 years ago

@prabirshrestha,

One thing I had thought of to improve this: if the completion count is greater than X, only convert partial results, but also set incomplete = true so the next char keystroke will request completion results again.

@DanTup's opinion:

That only works with IsIncomplete=true though, and that results in a lot of data being round-tripped multiple times and increased latency as the user types. That's why I'd prefer to filter to the prefix (if this can be clarified) than switch to IsIncomplete=true.

My interpretation: he prefers server-side filtering to obtain a manageable set of relevant candidate completions, and then client-side filtering to refine the completions as the user types more characters. Only one data exchange between client and server.

DanTup commented 4 years ago

My interpretation: he prefers server-side filtering to obtain a manageable set of relevant candidate completions, and then client-side filtering to refine the completions as the user types more characters

My current feeling is that it's better to have the client start filtering once it has the initial list. This should result in much better filter-as-type performance when the latency between client/server is high (which I think is a growing situation with things like Codespaces, GitPod, VS Code Remoting becoming more common).

If the payload can't be made small enough to prevent stuttering in some editors, using IsIncomplete=true and capping the list might be something to consider (perhaps as an option). I think some of the incoming fixes and the prefix-filtering might be significant enough to be able to avoid that though.

wow it looks like flutter has some long docs on some of those items though!

The full docs should not be transferred in the initial completion list - they should be omitted and then provided by the resolve call. If this doesn't seem to be the case on the latest code (of Flutter master), let me know and I'll do some digging.

natebosch commented 4 years ago

suggestion about a hard-cap; pulling a number out of thin air, say 1,500?

This plugin currently ignores the concept of incomplete suggestion lists and once it suggests completions it won't request any more from the server. This would be interesting to explore if I have a chance to look at supporting incremental completion lists.

Do you know if an LSP client can feed prefix suggestions to a language server for it to filter server side?

Not per the spec - that's why it's up to the server to decide to filter based on the file contents and cursor position. Luckily that's what the Dart server will start doing.

My current feeling is that it's better to have the client start filtering once it has the initial list. This should result in much better filter-as-type performance when the latency between client/server is high (which I think is a growing situation with things like Codespaces, GitPod, VS Code Remoting becoming more common).

If the payload can't be made small enough to prevent stuttering in some editors, using IsIncomplete=true and capping the list might be something to consider (perhaps as an option). I think some of the incoming fixes and the prefix-filtering might be significant enough to be able to avoid that though.

+1

The full docs should not be transferred in the initial completion list - they should be omitted and then provided by the resolve call. If this doesn't seem to be the case on the latest code (of Flutter master), let me know and I'll do some digging.

That requires support for resolving completion items 😉 This client doesn't support that, and since it doesn't advertise that support, the Dart server correctly sends them with the original request.

We don't have a hook for when the user iterates through the various suggestions, so we can't lazily request docs when they are looking at an item.

bluz71 commented 4 years ago

Nate,

This plugin currently ignores the concept of incomplete suggestion lists and once it suggests completions it won't request any more from the server. This would be interesting to explore if I have a chance to look at supporting incremental completion lists.

I will create a new issue exploring the idea of hard caps and incremental lists. However, before you do anything, I will explore a heap of language servers (Go, Rust, C++, Scala, Python, etc.) and see what median completion-list size each provides.

We know that Dart (before the fixes and enhancements) was sending up to 27,000, which is far too much for LSC to handle. Is this a one-off, or not?

I suspect it may take me a week to explore this space in languages and frameworks I am not familiar with. But I will pass on my findings in that new issue.

I'd say around 1,000 is a decent upper limit; at 5,000 things start going bad, and 20,000 is very pear-shaped.

Not per the spec - that's why it's up to the server to decide to filter based on the file contents and cursor position. Luckily that's what the Dart server will start doing.

Your new LSC optimisation and the upcoming Dart Analysis Server enhancement should resolve this particular issue. Hence, I will close this and open a new one about IsIncomplete=true.

That requires support for resolving completion items

It will be up to you; but I personally see little need for resolve. Other clients support it, and it seems laggy to me.

bluz71 commented 4 years ago

That requires support for resolving completion items 😉 This client doesn't support that, and since it doesn't advertise that support, the Dart server correctly sends them with the original request.

Should this be spawned into a new issue? Aka, make LSC advertise that it does not support resolve.

Since you don't use resolve, the server sending it to you will only make things more bloaty and slow for LSC completions.

DanTup commented 4 years ago

That requires support for resolving completion items 😉 This client doesn't support that, and since it doesn't advertise that support, the Dart server correctly sends them with the original request.

Oh, I see. I don't think LSP allows a client to say whether it supports this. I think what you're seeing may be that the items with docs are not those that are added as part of SuggestionSets, but rather the original list (we don't support any sort of lazy resolution for the original set). I suspect that if you don't support resolve, you'll be missing docs + the auto-import edits for those items right now.

It will be up to you; but I personally see little need for resolve. Other clients support it, and it seems laggy to me.

This shouldn't introduce lag; it's used only to provide supplementary info (like docs). In VS Code they do allow a resolve request to continue in the background when you complete (without holding up the completion), so if it responds with additionalEdits those can still be applied (e.g. for auto-imports).

bluz71 commented 4 years ago

The lag I see is the Neovim LSP client slowly rendering a secondary floating window attached to the completion menu. As I quickly navigate the completion choices, the floating window with the resolve details struggles to keep up. It looks janky.

Personally I would favour an insert mode mapping to trigger resolve details being displayed in the Vim preview window, which should auto-close if I scroll to the next completion choice.

natebosch commented 4 years ago

Personally I would favour an insert mode mapping to trigger resolve details being displayed in the Vim preview window, which should auto-close if I scroll to the next completion choice.

I had thought this sort of thing wasn't possible in vim, but I clearly overlooked things like :help CompleteChanged and :help complete_info()... there may be more room to implement this than I thought. https://github.com/natebosch/vim-lsc/issues/304 will track that.
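Purely as a sketch of the shape that could take (CompleteChanged and v:event.completed_item are real Vim features; the handler is hypothetical):

augroup lsc_resolve_sketch
  autocmd!
  " fires whenever the selected entry in the popup menu changes
  autocmd CompleteChanged * call s:OnSelectionChanged()
augroup END

function! s:OnSelectionChanged() abort
  let l:item = get(v:event, 'completed_item', {})
  if !empty(l:item)
    " a client could send completionItem/resolve here and show the
    " returned docs in the preview window
  endif
endfunction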

bluz71 commented 4 years ago

A related update: the Dart Analysis Server (master branch) has solved the lag issue entirely.

See the comparison:

Language Server                            Keyword Matches   Type Matches
Dart Analysis Server / Flutter 1.12        5,630             5,619
Dart Analysis Server / Flutter 1.17        26,789            26,777
Dart Analysis Server / Dev 2.9.0-19.0      863               572

Far fewer matches are now returned. LSC is now buttery smooth for Dart code, just like Neovim's inbuilt Lua-based LSP client.

And LSC itself improved via this issue as well, as noted above.

All good.