ahmedelgabri opened this issue 4 years ago
That's sad to hear. In fact, I'm also facing the same issue with large C++ files with clangd as the LSP. I'm not quite sure why file size causes the performance drag, but I'll look into it. Right now the temporary workaround is to use :CompletionToggle to toggle completion off and on when facing a giant file (which shouldn't be often).
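If it helps, a key mapping makes that toggle quicker to reach; a minimal Lua sketch (the <leader>tc key is an arbitrary choice, :CompletionToggle is the command mentioned above):

-- Sketch: bind the :CompletionToggle command mentioned above to a key so
-- completion can be switched off quickly inside huge files.
vim.api.nvim_set_keymap('n', '<leader>tc', ':CompletionToggle<CR>',
  { noremap = true, silent = true })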
I found a bit of bad behavior.

For example, a typical completion plugin sends one request in the | -> console| case, but completion-nvim sends 7 requests, one for each character of "console".

But this behavior is right if the server returns isIncomplete=true on the first response.
But correct behaviour doesn't always mean a good experience; maybe it should debounce and batch these requests instead of sending them letter by letter.
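Something along these lines, roughly (a minimal Lua sketch of the idea, not completion-nvim's actual code; request_completion and the 150 ms value are placeholders):

-- Rough debounce sketch (placeholder names, not completion-nvim internals):
-- collapse a burst of keystrokes into one completion request once typing pauses.
local uv = vim.loop
local debounce_ms = 150              -- illustrative value
local timer = uv.new_timer()

local function request_completion()
  -- stand-in for the real textDocument/completion request
end

-- on_keystroke would be wired to something like a TextChangedI autocmd
local function on_keystroke()
  timer:stop()                       -- restart the countdown on every key
  timer:start(debounce_ms, 0, vim.schedule_wrap(request_completion))
end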
@ahmedelgabri Can you provide the project you mentioned? I've done a little test on my side, but my issue seems to be on the clangd side. Otherwise I would have to push a testing branch and you might need to help me test it.
Unfortunately I can't do that. I'm fine with testing though
Actually, @mg979 has been working on a fork of completion-nvim (link), in which the redundant timers were refactored out and some other issues were solved as well. @ahmedelgabri, can you take some time to test whether it works for you? If so, we just need to start porting things back to completion-nvim; if not, we'll need to spend some more time figuring out what went wrong.

Note that its settings are not compatible with completion-nvim's, so to make testing easier you might want to spend some time reading its docs. Some settings @mg979 suggests are:
let g:autocomplete = {'confirm_key': "\<C-Y>"}
let g:autocomplete.chains = {
\ 'vim': ['snippet', 'keyn', 'file', 'cmd'],
\ 'cmake': {'String': ['keyn'], 'default': ['file', 'omni', 'keyn']},
\}
I have tried the fork. The good thing is that it at least lets me type and doesn't block my editor, and snippets are shown right away.

But it's still very slow and sometimes doesn't show completion at all. Here are a couple of gifs showing how it works in a typescript.tsx file where I do have React imported at the top (this is a real file from the repo I talked about, but I can't show details, sorry).

I intentionally typed slowly to show how slow it is, but usually I type faster than that.
Here is the config that I used, which is also shown on the left side of each gif
This file was added to ~/.config/nvim/after/plugin/autocomplete.vim
autocmd BufEnter * call autocomplete#attach()
let g:autocomplete = get(g:, 'autocomplete', {})
let g:autocomplete.snippets = 'vim-vsnip'
let g:autocomplete.chains = {
\ 'typescript' : [ 'path', ['lsp', 'snippet'], 'keyn' ],
\ 'typescript.tsx' : [ 'path', ['lsp', 'snippet'], 'keyn' ],
\}
imap <Tab> <Plug>(TabComplete)
inoremap <expr> <S-Tab> pumvisible() ? "\<C-p>" : "\<S-Tab>"
imap <localleader>j <Plug>(NextSource)
imap <localleader>k <Plug>(PrevSource)
set completeopt=menuone,noselect
set shortmess+=c
With the default g:autocomplete.chains:

With a custom g:autocomplete.chains for typescript:
@ahmedelgabri Thanks a lot for the report! I think not having the editor get stuck is good news. We'll start porting some key changes back to completion-nvim and see how it goes. I think we should aim at optimizing completion speed after the locking is gone.

Edit: Also, can you show a gif of using coc.nvim? I'm quite curious about the difference between the two, thanks in advance!
I will keep it running until the end of the week to get a better understanding, and maybe report more issues if they come up.
Thanks a lot. Also, some of the changes in the fork improve signature help and hover. Can you try using completion-nvim again with hover and signature help turned off and see if it still sucks? That way I'll have a better understanding of what went wrong. Sorry to keep bothering you :(
No worries, happy to help 😊
I was already testing completion-nvim again because I think I found another issue (though this one is with my setup), which is vim-gitgutter; it slowed Vim down a lot! After removing it, completion-nvim became more responsive. It's still a bit slow and I still get the cannot resume dead coroutine errors, but it's much more usable.

So that was interesting… I will most probably get rid of vim-gitgutter since it has way less value for me than LSP & completion.
Hi, as far as I know the typescript-language-server does not support isIncomplete.

But completion-nvim sends a request on each character.

I bisected to find the reason, and commit e4dddd8e29224c667972fc33a2537a2e7e1e1a4c is what causes it.

@ahmedelgabri Could you test with 49a2335d2f9e2c15bf597cde555ecad3bdf70663?
@ahmedelgabri Could you test with 49a2335?
I tested that, but there was no major difference.
BTW, I can confirm that disabling vim-gitgutter, or at least disabling its realtime updates, improved the performance greatly; I have been disabling it for a couple of days already.

Still, completion.nvim is not as fast as coc.nvim, but it's much better than before.
Ahh okay, so I think that's because vim-gitgutter might also use timers and async stuff (as completion-nvim does), causing them to block each other in the event loop and therefore slow Neovim down a lot. coc.nvim uses the remote API, so it's still usable in this case.

Nevertheless, the refactoring should be done. I'll start working on it whenever I have time. @ahmedelgabri I'll keep you updated on the progress.
@hrsh7th About the isIncomplete flag, I think that's worth discussing in another issue. We shouldn't send a request on each character if the server doesn't support isIncomplete.
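Roughly what I have in mind, as a sketch (assumed names, not the current implementation):

-- Sketch: re-query the server only when the previous response said
-- isIncomplete = true; otherwise keep filtering the cached items locally.
local cached_items, last_incomplete = nil, true

local function on_completion_response(result)
  -- per the LSP spec, result is either CompletionItem[] or a CompletionList
  -- of the form { isIncomplete = boolean, items = CompletionItem[] }
  if result.items then
    cached_items, last_incomplete = result.items, result.isIncomplete
  else
    cached_items, last_incomplete = result, false
  end
end

local function need_new_request()
  return cached_items == nil or last_incomplete
end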
@ahmedelgabri,
Can you temporarily switch to the LSC plugin? It is a VimScript LSP plugin. Note, it works fine in Neovim, it is not restricted to Vim 8.
This article of mine may help you get setup: https://bluz71.github.io/2019/10/16/lsp-in-vim-with-the-lsc-plugin.html
LSC apparently uses a 500ms debounce along with no support for isIncomplete.

I am keen to know if there is a difference in performance between Neovim LSP + completion.nvim and LSC. Maybe there is, maybe there isn't, maybe LSC will be worse. But it will be a data point that could help us understand this issue.
I have opened #231 with regard to debouncing, but that is purely speculative.
Best regards.
@bluz71 I couldn't get the completion to work at all. I used your config, and I tried setting g:lsc_auto_map to v:true and to {'defaults': v:true, 'Completion': 'omnifunc'}; nothing worked. Checking with set omnifunc? it's always empty, and I don't change that anywhere other than the Neovim LSP config, which I disabled to test LSC.
But I see that you are using gitgutter. I was using it too, and after removing it literally everything in my Vim improved: typing became more responsive and completion perf improved too. I'd say start there first.
Author of vim-lsp and asyncomplete.vim here.
Debouncing is not enough. There are other, smarter ways to fix perf, which asyncomplete.vim and NCM use, similar to VSCode. I documented my findings on how VSCode stays performant at https://github.com/roxma/nvim-completion-manager/issues/30#issuecomment-283281158. You could do the same trick for completion-nvim. You might want to read the thread from the start to get the best context.
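The gist, as a rough sketch (filter_cached is a made-up helper, not asyncomplete.vim's actual code): request the full candidate list once at the word start, then filter it on the client for each following character instead of re-querying the server.

-- Sketch of the idea as I understand it: keep the server's candidates and
-- narrow them down locally while the user keeps typing the same word.
local function filter_cached(items, typed_prefix)
  local out = {}
  for _, item in ipairs(items or {}) do
    local word = item.filterText or item.label
    if vim.startswith(word:lower(), typed_prefix:lower()) then
      out[#out + 1] = item
    end
  end
  return out
end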
@bluz71 I couldn't get the completion to work at all. I used your config, and I tried setting g:lsc_auto_map to v:true and to {'defaults': v:true, 'Completion': 'omnifunc'}; nothing worked. Checking with set omnifunc? it's always empty, and I don't change that anywhere other than the Neovim LSP config, which I disabled to test LSC.
That is likely an issue with respect to the server command not working or not being correct. Which programming language are you using and which language server?
If you have time I am very interested in helping you get LSC working. Note, that is not because I want you to switch LSP clients to LSC, but rather I want the data point of how LSC performance compares with completion.nvim for you. Maybe we can improve the performance of this plugin?
Can you post your LSC config here, thanks. My first instinct is that you may need to fully qualify the language server binary and possibly add an --lsp or --stdio type flag. I believe it should be an easy fix.
In my case I have Dart/JavaScript and Ruby LSP setups for both LSC and Neovim LSP + completion, here and here.
But I see that you are using gitgutter. I was using it too, and after removing it literally everything in my Vim improved: typing became more responsive and completion perf improved too. I'd say start there first.
My assertion in #231 is that I notice completion.nvim uses more CPU than LSC. Hence, in your case removing gitgutter helps, but it may still be the case that completion.nvim (or the language server it communicates with) is using more CPU than it ideally should (when compared with LSC). A theory that is yet to be confirmed.
As for gitgutter, I notice no issue unless I am dealing with files 200K and greater. I also configure gitgutter to use Ripgrep:
let g:gitgutter_grep = 'rg'
function! signs#Disable() abort
:GitGutterBufferDisable
:ALEDisableBuffer
endfunction
autocmd BufReadPre *
\ if getfsize(expand('%')) > 200000|
\ call signs#Disable()|
\ endif
Anyway, your code base and test case should be really helpful to ascertain if there are inefficiencies in completion.nvim.
Best regards.
Author of vim-lsp and asyncomplete.vim here.
debouncing is not enough. There are other smarter ways to fix perf which asyncomplete.vim and ncm uses which is similar to VSCode. I documented my findings of how vscode is performant at roxma/nvim-completion-manager#30 (comment). You could do the same trick for completion-nvim. you might want to read the thread from start to get the best context.
Hello again @prabirshrestha.
If completion.nvim is sending a request to the server for every character the user types (which it may not be doing, I am just speculating), then a debounce of 500ms surely could be of some help (LSC uses 500ms and is none the worse for it). It may not be a silver bullet.

I do agree that the tips you provide in the linked post are also highly worthwhile, maybe even more so than debouncing.
I will copy your post over to #231 since that is the debounce issue.
This one, which may be strongly related, should focus on @ahmedelgabri's original performance issue. If we can get LSC working for him, that would provide an interesting comparison. What will be the result? I have learnt that with LSP anything is possible; LSC could be faster, or slower, or the same.
Cheers.
That is likely an issue with respect to the server command not working or not being correct. Which programming language are you using and which language server? […] Can you post your LSC config here, thanks. My first instinct is that you may need to fully qualify the language server binary and possibly add an --lsp or --stdio type flag. I believe it should be an easy fix. […] In my case I have Dart/JavaScript and Ruby LSP setups for both LSC and Neovim LSP + completion, here and here.
That's highly unlikely, because the same server works fine when I switch back to the Neovim LSP. I had already added --stdio; I was testing with the TypeScript LS, which is invoked as typescript-language-server --stdio, and that's what I had in the LSC config, and it's already in my $PATH.
This was the setup I tested LSC with.
let g:lsc_enable_autocomplete = v:true
let g:lsc_auto_map = v:true
let g:lsc_server_commands = {'typescript': 'typescript-language-server --stdio', 'typescript.tsx': 'typescript-language-server --stdio'}
If you have time I am very interested in helping you get LSC working. Note, that is not because I want you to switch LSP clients to LSC, but rather I want the data point of how LSC performance compares with completion.nvim for you. Maybe we can improve the performance of this plugin?
I understand that 🙂
But I see that you are using gitgutter. I was using it too, and after removing it literally everything in my Vim improved: typing became more responsive and completion perf improved too. I'd say start there first.

My assertion in #231 is that I notice completion.nvim uses more CPU than LSC. Hence, in your case removing gitgutter helps, but it may still be the case that completion.nvim (or the language server it communicates with) is using more CPU than it ideally should (when compared with LSC). A theory that is yet to be confirmed.

As for gitgutter, I notice no issue unless I am dealing with files 200K and greater. I also configure gitgutter to use Ripgrep:

let g:gitgutter_grep = 'rg'

function! signs#Disable() abort
  :GitGutterBufferDisable
  :ALEDisableBuffer
endfunction

autocmd BufReadPre *
  \ if getfsize(expand('%')) > 200000|
  \ call signs#Disable()|
  \ endif
Thanks, I will check that, but in my case it was not about file size, because the files I tested on in that repo were not that large and it still choked really badly. I also had rg as the gitgutter grep, but that tip about disabling based on file size is a nice one; I will use it for other stuff anyway, thanks for the tip.
Try this instead:
let g:lsc_server_commands = {
\ 'typescript': { 'command': 'typescript-language-server --stdio' },
\ 'typescript.tsx': { 'command': 'typescript-language-server --stdio' }
\ }
The space between typescript-language-server and --stdio may require the use of the command directive. I always use command, as noted here.

I myself use typescript-language-server with JavaScript and TypeScript code.
Also, LSC automap, from my reading, does not set up omnifunc. I recommend defining your own mappings; these are mine:
let g:lsc_auto_map = {
\ 'GoToDefinition': 'gd',
\ 'FindReferences': 'gr',
\ 'Rename': 'gR',
\ 'ShowHover': 'K',
\ 'FindCodeActions': 'ga',
\ 'Completion': 'omnifunc',
\}
Best regards.
Ok, I have tested LSC again and it's indeed slightly more responsive than completion-nvim; I can keep typing without any locks or hangs.

I understand that the completion list is affected by the LSP server itself, and in this case I'm not sure anything can be done, because the project is huge and typescript-language-server is slow.
This is the configuration I used, thanks @bluz71 for the help.
let g:lsc_server_commands = {
\ 'javascript': { 'command': 'typescript-language-server --stdio' },
\ 'javascript.jsx': { 'command': 'typescript-language-server --stdio' },
\ 'typescript': { 'command': 'typescript-language-server --stdio' },
\ 'typescript.tsx': { 'command': 'typescript-language-server --stdio' }
\ }
let g:lsc_auto_map = {
\ 'GoToDefinition': 'gd',
\ 'FindReferences': 'gr',
\ 'Rename': 'gR',
\ 'ShowHover': 'K',
\ 'FindCodeActions': 'ga',
\ 'Completion': 'omnifunc',
\}
I also noticed that the cannot resume dead coroutine error in completion-nvim usually happens when I'm trying to complete JSX, but not normal JavaScript/TypeScript code:
React. // this rarely shows this error
<ComponentName // this usually shows the error
LSC should be slower since it is VimScript, which is orders of magnitude slower than LuaJIT.
That it is more responsive means there are likely inefficiencies in completion.nvim.
Thanks for testing.
Ok, I think I found another potential-performance-killing item with upstream Neovim LSP when compared with LSC.
Whilst in insert mode, as changes are being made, LSC sends minimal byte range style didChange
events to the language server, for example:
{
"method": "textDocument/didChange",
"jsonrpc": "2.0",
"params": {
"contentChanges": [
{
"range": {
"end": { "character": 2, "line": 8 },
"start": { "character": 2, "line": 8 }
},
"text": " @\n ",
"rangeLength": 0
}
],
"textDocument": { "uri": "file:///tmp/foo/foobar.rb", "version": 2 }
}
}
Neovim LSP, from my testing, is always sending the complete file contents after every change, for example:
{
"method": "textDocument/didChange",
"jsonrpc": "2.0",
"params": {
"contentChanges": [
{
"text": "class FooBar\n def initialize\n @abc = \"abc\"\n @hello = \"hello\"\n @help = \"help\"\n end\n\n def baz\n @\n end\nend\n\nfoo = Foobar.new\n"
}
],
"textDocument": { "uri": "file:///tmp/foo/foobar.rb", "version": 6 }
}
}
How will the language server know what has changed from a Neovim LSP didChange like this? Does it have to re-diff the document server-side?
I just tested a 3,000 line JavaScript file and Neovim LSP is sending the full 3,000 lines after every keypress in insert mode (or maybe until CursorHold?).
This seems highly inefficient to me.
LSC, on the other hand, is sending very small range diffs. Here is an example of the packets sent whilst I was appending [1, 2, 3] into a huge JavaScript file:
{
"method": "textDocument/didChange",
"jsonrpc": "2.0",
"params": {
"contentChanges": [
{
"range": {
"end": { "character": 6, "line": 419 },
"start": { "character": 6, "line": 419 }
},
"text": "\n ",
"rangeLength": 0
}
],
"textDocument": {
"uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
"version": 2
}
}
}
{
"method": "textDocument/didChange",
"jsonrpc": "2.0",
"params": {
"contentChanges": [
{
"range": {
"end": { "character": 6, "line": 419 },
"start": { "character": 6, "line": 419 }
},
"text": "[]",
"rangeLength": 0
}
],
"textDocument": {
"uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
"version": 3
}
}
}
{
"method": "textDocument/didChange",
"jsonrpc": "2.0",
"params": {
"contentChanges": [
{
"range": {
"end": { "character": 7, "line": 419 },
"start": { "character": 7, "line": 419 }
},
"text": "1, 2, 3",
"rangeLength": 0
}
],
"textDocument": {
"uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
"version": 4
}
}
}
I am not an LSP expert, but Neovim LSP sending the complete file buffer for every insertion change via didChange seems like it could be very bad for language server performance?
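To make the contrast with the LSC payloads above concrete, here is a hedged Lua sketch of building an incremental change for a single-line edit (a hypothetical helper, not Neovim's or LSC's actual code; a real client would also need to handle multi-line edits and UTF-16 column offsets):

-- Hypothetical helper: build didChange params with a minimal range,
-- matching the shape of the LSC payloads shown above.
local function single_line_change(uri, version, lnum, start_col, end_col, new_text)
  return {
    textDocument = { uri = uri, version = version },
    contentChanges = {
      {
        range = {
          start   = { line = lnum, character = start_col },
          ["end"] = { line = lnum, character = end_col },
        },
        rangeLength = end_col - start_col,
        text = new_text,
      },
    },
  }
end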
It seems to be a problem with Neovim's LSP implementation. You could create an issue there.

I thought Neovim's built-in LSP supported incremental sync, but that seems to have been my misunderstanding.
https://github.com/neovim/neovim/blob/master/runtime/lua/vim/lsp.lua#L813
@hrsh7th,
I logged Ruby and JavaScript language servers with small or huge files, Neovim is always sending the full buffer payload.
And it happens pretty frequently whilst a user is typing in new content. If one types slow enough it is after every key press.
The original poster of this issue is dealing with a big repo and maybe even big files, so the current didChange behaviour could well be a factor.
Note, I think other strategies such as debouncing are still worth exploring in completion.nvim.
Best regards.
Very sorry for the late reply (again).

@bluz71 If debouncing means a timer interval for checking the changed event, completion-nvim already has that, and it's in fact a configurable variable, g:completion_timer_cycle; the default value is 80 though, which is much lower than LSC's. The isIncomplete issue should be fixed though, and I'll be working on it.

@prabirshrestha Thanks a lot for the information. completion-nvim is highly inspired by asyncomplete.vim, so thanks for your work on that :)

@hrsh7th May I ask how you find out that a server doesn't support isIncomplete? For sumneko_lua it just keeps sending isIncomplete=true, and it's quite annoying.
If I want to know whether a server supports it, I investigate the LSP client's logs.
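For the built-in client, something like this exposes the raw traffic (assuming a Neovim build where these helpers exist):

-- Sketch: turn up Neovim's LSP logging and locate the log file, then grep it
-- for textDocument/completion responses and their isIncomplete flag.
vim.lsp.set_log_level('debug')
print(vim.lsp.get_log_path())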
@bluz71 If debouncing means a timer interval for checking the changed event, completion-nvim already has that, and it's in fact a configurable variable, g:completion_timer_cycle; the default value is 80 though, which is much lower than LSC's. The isIncomplete issue should be fixed though, and I'll be working on it.
Debouncing means buffering up textDocument/didChange events and sending only one request to the language server after a certain period of inactivity (say 200-500ms), rather than after every keypress.
In the case of Neovim LSP + completion.nvim, I set g:completion_timer_cycle to 2 seconds and monitored communication to and from the Ruby Solargraph language server.

Every single key press in insert mode instantly triggers a textDocument/didChange request. And Neovim LSP currently sends the entire buffer document, which is highly inefficient. An issue has been opened at Neovim about that.
Currently completion.nvim does not debounce whilst a user is typing content.
Is that a big deal? Currently yes, because Neovim LSP is sending the full buffer. If it sent incremental didChange it might not be a big deal. @prabirshrestha is not convinced debouncing helps that much; however, LSC does have a 500ms debounce and it seems pretty good.
Okay, I understand. This doesn't seem like an issue in completion-nvim though, because textDocument/didChange is not handled here. The way completion-nvim handles completion is by using a timer to monitor b:changedtick, and the timer cycle is controlled by g:completion_timer_cycle. It could be changed to something smarter though, like using nvim_buf_attach to trigger callbacks on changedtick events.
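Roughly like this (a sketch; trigger_completion is a placeholder for the real entry point):

-- Sketch of the nvim_buf_attach idea: react to buffer changes via a callback
-- instead of polling b:changedtick on a fixed timer cycle.
local function trigger_completion() end   -- placeholder

vim.api.nvim_buf_attach(0, false, {
  on_lines = function(_, _buf, _tick, _first, _last)
    -- fires on every text change; debounce/schedule the completion check here
    vim.schedule(trigger_completion)
  end,
})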
Hello @haorenW1025,
Indeed, I am now learning that textDocument/didChange is being triggered by upstream Neovim LSP. This is being discussed in Neovim didChange issue 13049. Currently the full buffer is sent on every insert-mode keypress, and an interim patch to add incremental textDocument/didChange exposed quadratic requests per keypress being sent to the language server (not wanted at all).

So it is hard at the moment for me to recommend any changes to completion.nvim to improve performance, at least not until the upstream Neovim didChange behaviour is optimised.

For now, whilst waiting for upstream to fix the Neovim 13049 issue, completion.nvim should concentrate on improving isIncomplete behaviour as well as:
The way completion-nvim handles completion is by using a timer to monitor b:changedtick, and the timer cycle is controlled by g:completion_timer_cycle. It could be changed to something smarter though, like using nvim_buf_attach to trigger callbacks on changedtick events.
Best regards.
I've tried to fix the redundant completion requests related to isIncomplete in #249. The next step is reworking completion using nvim_buf_attach, which will reduce the extra timer overhead.
Hi,

If this can be of any help: I'm also using completion-nvim in a quite large codebase (10K+ files indexed with ccls).

Out of the box, completion was almost unusable due to the massive number of symbols triggered on the first key.

Increasing completion_trigger_keyword_length to 3 improved things a great deal.
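Concretely, this is all it took (assuming the documented g:completion_trigger_keyword_length variable, set here through the Lua vim.g table):

-- Only ask for completion after 3 characters have been typed.
vim.g.completion_trigger_keyword_length = 3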
Now, that's the first time I have a fully working LSP for this project.
Thanks !
@sl8vz Thanks for the feedback, that's actually really interesting. So the slow-response issue may be related to the LSP giving back too many items and completion-nvim having a hard time processing them. Maybe some optimization of the string processing, or limiting the number of items taken from the LSP, would help if that's the case.
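For example, even a blunt cap before sorting/matching might help; a rough sketch (MAX_ITEMS and truncate_items are made-up names, not existing options):

-- Hypothetical sketch: cap how many server items are handed to the matcher so
-- a server that returns thousands of symbols does not stall the UI.
local MAX_ITEMS = 200   -- arbitrary cap for illustration

local function truncate_items(items)
  if items and #items > MAX_ITEMS then
    return vim.list_slice(items, 1, MAX_ITEMS)
  end
  return items
end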
Sorry to put this stupid question here, but couldn't you simply measure how much execution time is spent in the different parts of completion-nvim? This might not reveal the higher-level issue, but it would detect things like the complex sorting mentioned in @haorenW1025's last comment.
I actually also think it would really help a lot to add some debounce on user input. It makes no sense (to me) to trigger completion every time I press a key while I'm typing a full word. For example, when I want to write 'feedback', it tries to show me completions for each letter, and it noticeably lags and freezes while I'm writing.
Issue #231 (closed in favour of this one) requested a debounce like the LSC plugin's (which uses 500ms completion debouncing, along with incremental textDocument/didChange requests).

LSC feels more responsive to me than Neovim LSP + completion.nvim, even though it is written in Vimscript whilst the latter are LuaJIT (which is orders of magnitude faster than Vimscript).
I also feel debouncing should be used.
This comment by @prabirshrestha is also worth exploring.
I just tested the nvim-compe completion plugin, and in my quick testing it feels smoother and more responsive than completion-nvim. The plugin supports Neovim's LSP.

I only found out about nvim-compe a day or so ago. It is by the same developer as vim-vsnip (which has been my preferred snippet plugin for a while now).
I am switching over.
I think this plugin (completion-nvim) is indeed the future, as it will probably end up in nvim core, but for now it still needs improvements. I am also switching, since nvim-compe feels smoother, but I will be back as soon as this is improved.
I think this plugin (completion-nvim) is indeed the future, as it will probably end up in nvim core [...]
I don't think that's true. Neovim core already provides completion via omnifunc=v:lua.vim.lsp.omnifunc. As far as I know there are no plans for as-you-type completion (a.k.a. auto-completion).
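For reference, a minimal sketch of that built-in route, using the usual nvim-lspconfig on_attach pattern (completion is then triggered manually with <C-x><C-o>, no auto-popup):

-- Wire the built-in LSP omnifunc per buffer; completion is then requested
-- manually with <C-x><C-o> rather than as-you-type.
local function on_attach(_, bufnr)
  vim.api.nvim_buf_set_option(bufnr, 'omnifunc', 'v:lua.vim.lsp.omnifunc')
end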
I just tested out the nvim-compe completion plugin and in my quick testing it feels smoother and more responsive than completion-nvim. The plugin supports Neovim's LSP.
@bluz71 thanks for this! I just tested it & it's a massive difference. Instant switch.
I think this plugin (completion-nvim) is indeed the future, as it will probably end up in nvim core, ...
@lucax88x just because I'm curious: why do you think so? :thinking:
@weilbith Probably because of my ignorance, but I saw it's under the nvim-lua organization and I just did 1+1.
I'm facing massive performance issues with a project that I'm working on, which contains a lot of files. In this codebase completion is nearly unusable; it causes lots of Lua and out-of-memory errors and crashes the native LSP.

One of the very common errors I get is this:
To make sure this was not the native LSP itself, I did the following:
- nvim-lspconfig but without completion-nvim (setlocal omnifunc=v:lua.vim.lsp.omnifunc) -> everything ran very smoothly & fast
- completion-nvim

I tried to see if coc.nvim has the same issues or not, and it worked fine too.

This is my completion.nvim config & this is my LSP config.

Note: the exact same LSP config works fine without completion.nvim, which also means that the TypeScript LSP is not the culprit either.