ray-x opened this issue 3 years ago
A benchmark is possible to implement, but parallel requests are impossible in current Neovim.
@ray-x I'm experiencing the same, but your issue gave me an idea. Since according to @hrsh7th parallel requests are not possible, then what about lazy loading the completion elements?
For example: cmp-spell gives spelling suggestions using Vim's spellsuggest, which ends up producing a vast number of completion items and hence slows down nvim (mostly while typing). @hrsh7th would it be possible to only "pull", for instance, the first 5 completion items from a source, and then, when nvim-cmp's popup/floating window is scrolled, load another five, and so on?
Hope it helps :+1:
I don't think spellsuggest returns a lot of items. How can I reproduce this behavior?
The Vim spellsuggest API is slow and it blocks Neovim.
The path source depends on disk speed.
Basically what @Shougo said. To reproduce the spellsuggest behaviour, add the cmp-spell source and just type fast or keep pressing backspace in a markdown file; you should be able to see some "lag".
If spellsuggest itself is slow, there's nothing we can do, because the spellsuggest API is Vim-native and only allows synchronous calls, right?
The path source uses the Luv API, so it works in parallel (partially).
"Slow" is not the same as "lag".

> The path source depends on disk speed.

This means *slow*. Because cmp-path uses the Luv API, it won't cause lag.

> The Vim spellsuggest API is slow and it blocks Neovim.

This can be the reason for the lag.
Some suggestions:
```lua
sources = {
  { name = 'nvim_lsp' }, { name = 'buffer' }, { name = 'look' },
}
```

That would let cmp fetch items from nvim_lsp first, then buffer, then look. On timeout, the results of low-priority sources that have not finished would be dropped.
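For reference, nvim-cmp later gained a `group_index` field on sources that implements this kind of fallback: group 1 is queried first, and group 2 is only consulted when group 1 yields no items. A minimal sketch, assuming a recent nvim-cmp (see `:help cmp-config.sources` for the exact semantics):

```lua
local cmp = require('cmp')

cmp.setup({
  sources = {
    -- Group 1: tried first.
    { name = 'nvim_lsp', group_index = 1 },
    -- Group 2: only used when group 1 returns no items.
    { name = 'buffer', group_index = 2 },
    { name = 'look', group_index = 2 },
  },
})
```

This avoids paying the cost of slow fallback sources while the LSP has results to offer.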
Alternatively, use the `priority` field. For example:

```lua
sources = {
  { name = 'nvim_lsp' },
  { name = 'nvim_lua', ft = 'lua' },
  { name = 'look', ft = { 'markdown', 'html' } },
}
```

This would enable nvim_lua only for Lua files, and look only for markdown and HTML file types.
This setup would be more consistent with the other per-source options, e.g. the `keyword_length` setting.
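The proposed `ft` field doesn't exist in nvim-cmp, but a similar effect is possible with `cmp.setup.filetype`, which overrides the source list per filetype. A hedged sketch, assuming a recent nvim-cmp:

```lua
local cmp = require('cmp')

-- Global sources used in every buffer.
cmp.setup({
  sources = {
    { name = 'nvim_lsp' },
    { name = 'buffer' },
  },
})

-- Override the source list for specific filetypes.
cmp.setup.filetype('lua', {
  sources = {
    { name = 'nvim_lsp' },
    { name = 'nvim_lua' },  -- Neovim Lua API completions, only in Lua files
  },
})

cmp.setup.filetype({ 'markdown', 'html' }, {
  sources = {
    { name = 'look' },      -- dictionary lookups, only in prose-like files
    { name = 'buffer' },
  },
})
```

Keeping heavy sources scoped to the filetypes where they are useful avoids querying them everywhere.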
Also, the autocmd setup does not work when I lazy-load the plugin.
The same issue.
I found that nvim-cmp is a little bit slower than nvim-compe.
I can't figure it out. How slow is it?
> I found that nvim-cmp is a little bit slower than nvim-compe

Isn't that relative to the number of sources / which sources you are using?
```lua
sources = {
  { name = 'nvim_lsp' },
  { name = 'path' },
  { name = 'nvim_lua' },
  { name = 'vsnip' },
  { name = 'calc' },
  { name = 'emoji' },
  { name = 'tags' },
},
```
But I use the same sources in nvim-compe and it's really fast.
Can you upload the examples?
I have a feeling the difference is caused by the `throttle_time = 80` setting.
> I have a feeling the difference is caused by the `throttle_time = 80` setting.

Is there a way to set `throttle_time` in nvim-cmp?
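At the time of this thread the throttle value was hardcoded, but newer nvim-cmp versions expose these tunables under a `performance` table. A hedged sketch (field names may differ across versions; check `:help cmp-config.performance`):

```lua
require('cmp').setup({
  performance = {
    debounce = 60,          -- ms to wait after typing before querying sources
    throttle = 30,          -- ms between filtering/matching passes
    fetching_timeout = 500, -- ms before slow sources are given up on
  },
})
```

Raising `debounce` trades completion latency for fewer source queries while typing quickly.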
I've improved the performance. Could you test with the latest?
And throttle_time was removed. Currently, I've specified 50ms as a hardcoded value.
I suspect the following code needs to be double-checked: https://github.com/hrsh7th/cmp-buffer/blob/5dde5430757696be4169ad409210cf5088554ed6/lua/cmp_buffer/buffer.lua#L43 https://github.com/hrsh7th/cmp-buffer/blob/5dde5430757696be4169ad409210cf5088554ed6/lua/cmp_buffer/init.lua#L59
From what I see, if the buffer source is in a processing state, the callback will be invoked after 50ms (hardcoded). The 100ms and 200ms timers also have some impact on the overall performance.
> I've improved the performance. Could you test with the latest?
> And throttle_time was removed. Currently, I've specified 50ms as a hardcoded value.
I use commit b6b15d5f6e46643462b5e62269e7babdab17331c.
It works well.
Now I've switched from nvim-compe to nvim-cmp.
😄
@hungrybirder since you just migrated from compe, I'd recommend you my cmp + luasnip + friendly snippets + autopairs setup and my cmp config :)
I'm using cmp + autopairs + (cmp-vsnip + vsnip + friendly-snippets). I'll check out luasnip later and learn from your config.
Thank you.
Is it possible to cache all the completion items from sub-packages from the start? E.g., I'm writing Python with the TensorFlow package, and after I enter `tf.keras.` I need to wait about 1-2 seconds for the completion menu to show up.
Or maybe the slowness is due to the language server, not nvim-cmp? To be clear, I'm using pylsp for Python.
I think speed really matters for making Neovim competitive with modern IDEs, but this is not a simple problem, since I expect many cross-repo issues are involved. Somewhat disappointing...
It seems to be LSP slowness. pylsp uses jedi, which is slow.
You can use pyright lsp instead.
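For anyone switching, enabling pyright through nvim-lspconfig is typically a few lines; a sketch, assuming pyright is installed (e.g. `npm install -g pyright`) and cmp_nvim_lsp is available:

```lua
-- Advertise nvim-cmp's completion capabilities to the server so it
-- sends snippet-style and resolved completion items.
local capabilities = require('cmp_nvim_lsp').default_capabilities()

require('lspconfig').pyright.setup({
  capabilities = capabilities,
})
```

The `capabilities` wiring is optional but recommended when using nvim-cmp as the completion frontend.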
@Shougo: pyright caused high CPU usage for me, which is why I changed to pylsp. Anyway, I will try pyright again.
> Is it possible to cache all the completion items from sub-packages from the start?

I think pyright implements this feature, at the cost of CPU usage. So it is a trade-off.
> [...] You can use pyright lsp instead.
@Shougo: I just confirmed that pyright is indeed (much) faster. Thanks for your help! I remember that I migrated to pylsp due to the CPU usage issue, and back then I didn't notice this performance issue on (somewhat) large packages.
Now another problem: with pylsp I could use the Python formatter black directly. What's the recommended alternative with pyright?
UPDATE: it turns out I have to install diagnosticls again and run `pip install isort black` to use those tools alongside pyright.
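Since pyright doesn't provide formatting, another common pattern at the time (as an alternative to diagnosticls) was to run black and isort through null-ls; a hedged sketch, assuming null-ls is installed:

```lua
local null_ls = require('null-ls')

null_ls.setup({
  sources = {
    -- Both builtins shell out to the corresponding pip-installed tools.
    null_ls.builtins.formatting.black,  -- requires `pip install black`
    null_ls.builtins.formatting.isort,  -- requires `pip install isort`
  },
})
```

Formatting then goes through the normal LSP path (`vim.lsp.buf.format()`), while pyright keeps handling completion and diagnostics.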
> [...] Also would be good to show some statistics on the source performance [...]
Let me continue this thread. This might be off-topic, but it would be cool if these statistics could be shown on the status line, or even by plugins like nvim-notify when loading is complete.
As I add more sources to cmp, it gets slower. Is it possible to poll the sources in parallel and time out the slow ones? Also, it would be good to show some statistics on source performance. I have no clue which source is slow and should be turned off. (TabNine is an obvious one.)