hrsh7th / nvim-cmp

A completion plugin for neovim coded in Lua.
MIT License

[Feature] Slow sources benchmark and timeout #148

Open ray-x opened 3 years ago

ray-x commented 3 years ago

As I add more sources to cmp, completion gets slower. Is it possible to poll the sources in parallel and time out the slow ones? It would also be good to show some statistics on source performance; I have no clue which source is slow and should be turned off (tabnine is an obvious one).

hrsh7th commented 3 years ago

A benchmark is possible to implement, but parallel requests are impossible in current Neovim.

pocco81 commented 3 years ago

@ray-x I'm experiencing the same, but your issue gave me an idea. Since, according to @hrsh7th, parallel requests are not possible, what about lazy-loading the completion items?

For example: cmp-spell gives spelling suggestions using Vim's spellsuggest, which ends up producing a vast number of completion items and hence slows down nvim (mostly when typing). @hrsh7th, would it be possible to only "pull", for instance, the first 5 completion items from a source, and then load another five each time nvim-cmp's popup/floating window is scrolled, and so on and so forth?
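As a stopgap, nvim-cmp's per-source max_item_count option (as I understand the docs) can at least cap how much a noisy source contributes. It doesn't load more items on scroll, but something like this keeps cmp-spell from flooding the menu:

-- Sketch: cap how many items each source may return (max_item_count is a per-source option).
require('cmp').setup({
  sources = {
    { name = 'nvim_lsp' },
    { name = 'spell', max_item_count = 5 },   -- only take the first 5 spell suggestions
  },
})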

Hope it helps :+1:

pocco81 commented 3 years ago

I have no clue which source is slow and should be turned off

In my case cmp-spell and cmp-path.

hrsh7th commented 3 years ago

I don't think spellsuggest returns a lot of items. How can I reproduce such behavior?

Shougo commented 3 years ago

The Vim spellsuggest API is slow and it blocks your Neovim.

The path source depends on disk speed.

pocco81 commented 3 years ago

Basically what @Shougo said. To reproduce the spellsuggest behaviour, add the cmp-spell source and just type fast or keep pressing backspace in a markdown file; you should be able to see some "lag".
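If you want to put a number on it, a rough (blocking) measurement of spellsuggest from inside Neovim looks something like this (the misspelled word is just an example, and 'spell' must be enabled for the buffer):

-- Rough, synchronous timing of Vim's spellsuggest() from Lua.
-- Requires ':setlocal spell' so that 'spelllang' is in effect.
local start = vim.loop.hrtime()
local suggestions = vim.fn.spellsuggest('recieve', 25)   -- deliberately misspelled example word
local elapsed_ms = (vim.loop.hrtime() - start) / 1e6
print(string.format('spellsuggest returned %d items in %.2f ms', #suggestions, elapsed_ms))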

hrsh7th commented 3 years ago

If spellsuggest itself is slow, there's nothing we can do, because the spellsuggest API is native Vim and only allows synchronous calls, right?

The path source uses the Luv API, so it works in parallel (partially).

hrsh7th commented 3 years ago

Slow is not the same thing as lag.

The path source depends on disk speed.

This means slow. Because cmp-path uses the Luv API, it won't cause lag.

The Vim spellsuggest API is slow and it blocks your Neovim.

This can be the reason for the lag.
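To illustrate the difference (paths below are placeholders): a synchronous call waits on the disk before Neovim can process the next keystroke, while a Luv callback hands the wait to the event loop.

-- Blocking: Neovim waits for the disk before handling the next keystroke.
local size = vim.fn.getfsize('/some/large/file')        -- placeholder path

-- Non-blocking (Luv): the callback fires later on the event loop, so typing is not frozen.
vim.loop.fs_stat('/some/large/file', function(err, stat)
  if err then return end
  vim.schedule(function()                               -- hop back to the main loop before printing
    print(('size: %d bytes'):format(stat.size))
  end)
end)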

ray-x commented 3 years ago

Some suggestions:

When enforcing a timeout, set source priority based on the index in the sources array, e.g.

sources = {
  { name = 'nvim_lsp' }, { name = 'buffer' }, { name = 'look' },
}

That will let cmp get items from lsp first, then buffer, then look. On timeout, drop the results of lower-priority sources that have not finished.

Alternatively use the priority field.
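For reference, the priority and group_index source options can already approximate this ordering; this is my reading of the docs, so double-check the exact semantics:

-- Sketch: sources in group 2 are consulted only when group 1 produced nothing,
-- and priority weights the sorting of what is shown.
sources = {
  { name = 'nvim_lsp', group_index = 1, priority = 100 },
  { name = 'buffer',   group_index = 2, priority = 50 },
  { name = 'look',     group_index = 2, priority = 10 },
}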

Disable/enable sources per filetype. Instead of the autocmd on FileType suggested in README.md, use this:

e.g.

sources = {
  { name = 'nvim_lsp' }, { name = 'nvim_lua', ft = 'lua' }, { name = 'look', ft = { 'markdown', 'html' } },
}

This would only enable nvim_lua for lua files, and look for markdown and html file types. This setup would be more consistent with the other per-source settings, e.g. keyword_length. Also, the autocmd setup does not work when I lazy-load the plugin.
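For what it's worth, newer nvim-cmp versions expose cmp.setup.filetype, which achieves roughly this without an autocmd and also works when the plugin is lazy-loaded, since it is just a call at setup time (a sketch; check the README for the exact form):

local cmp = require('cmp')

-- Global setup: sources available everywhere.
cmp.setup({
  sources = {
    { name = 'nvim_lsp' },
    { name = 'buffer' },
  },
})

-- Per-filetype override: only markdown buffers also get the 'look' source.
cmp.setup.filetype('markdown', {
  sources = {
    { name = 'nvim_lsp' },
    { name = 'buffer' },
    { name = 'look' },
  },
})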

hungrybirder commented 3 years ago

Same issue here. I found that nvim-cmp is a little bit slower than nvim-compe.

hrsh7th commented 3 years ago

I can't figure it out. How slow is it?

pocco81 commented 3 years ago

I found that nvim-cmp is a little bit slower than nvim-compe

Isn't that relative to the amount of sources/which sources you are using?

hungrybirder commented 3 years ago

I found that nvim-cmp is a little bit slower than nvim-compe

Isn't that relative to the amount of sources/which sources you are using?

https://github.com/hungrybirder/dotfiles/blob/using_nvim_cmp_but_it_is_slower/nvim/lua/module/nvim-cmp.lua

  sources = {
    { name = 'nvim_lsp'},
    { name = 'path'},
    { name = 'nvim_lua'},
    { name = 'vsnip'},
    { name = 'calc'},
    { name = 'emoji'},
    { name = 'tags'},
  },

But I use the same sources in nvim-compe and it's really fast.

Shougo commented 3 years ago

Can you upload the examples?

ray-x commented 3 years ago

I have a feeling the difference is caused by throttle_time = 80 setup.

hungrybirder commented 3 years ago

I have a feeling the difference is caused by throttle_time = 80 setup.

Is there a way to set throttle_time in nvim-cmp?

hrsh7th commented 3 years ago

I've improved the performance. Could you test with the latest?

Also, throttle_time was removed. Currently, it is hardcoded to 50ms.
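Recent nvim-cmp versions appear to expose these knobs again under a performance table; the option names below are taken from the current documentation and did not exist when this was written, so treat this as a sketch:

-- Sketch based on the current nvim-cmp docs; names and defaults may differ in your version.
require('cmp').setup({
  performance = {
    debounce = 60,           -- ms to wait after typing before querying sources
    throttle = 30,           -- ms between filtering passes while typing
    fetching_timeout = 200,  -- ms to wait for slow sources before showing what has arrived
  },
})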

ray-x commented 3 years ago

I suspect the following code: https://github.com/hrsh7th/cmp-buffer/blob/5dde5430757696be4169ad409210cf5088554ed6/lua/cmp_buffer/buffer.lua#L43 https://github.com/hrsh7th/cmp-buffer/blob/5dde5430757696be4169ad409210cf5088554ed6/lua/cmp_buffer/init.lua#L59

needs to be double-checked. From what I see, if the buffer source is in a processing state, the callback is only invoked after 50ms (hardcoded). The 100ms and 200ms timers also have some impact on overall performance.

hungrybirder commented 3 years ago

I've improved the performance. Could you test with the latest?

Also, throttle_time was removed. Currently, it is hardcoded to 50ms.

I'm using commit b6b15d5f6e46643462b5e62269e7babdab17331c. It works well. I've now switched from nvim-compe to nvim-cmp. 😄

pocco81 commented 3 years ago

@hungrybirder since you just migrated from compe, I'd recommend my cmp + luasnip + friendly-snippets + autopairs setup and my cmp config :)

hungrybirder commented 3 years ago

@hungrybirder since you just migrated from compe, I'd recommend my cmp + luasnip + friendly-snippets + autopairs setup and my cmp config :)

I'm using cmp + autopairs + (cmp-vsnip + vsnip + friendly-snippets). I'll check out LuaSnip later and study your config.

Thank you.

nyngwang commented 2 years ago

Is it possible to cache all the completion items from sub-packages from the start? E.g. I'm writing Python with the TensorFlow package, and after I type "tf.keras." I need to wait about 1~2s for the completion menu to show up.

Or maybe the slowness is due to the language server, not nvim-cmp? To be clear, I'm using pylsp for Python.

I think speed really matters for making Neovim competitive with modern IDEs, but this is not a simple problem, since I expect many cross-repo issues are involved. Somewhat disappointing...

https://user-images.githubusercontent.com/24765272/168724177-e2998b1c-02a0-4261-8418-70bf83f59418.mov

Shougo commented 2 years ago

It seems to be LSP slowness. pylsp uses jedi, and jedi is slow.

You can use pyright lsp instead.
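Switching is roughly this with nvim-lspconfig plus cmp-nvim-lsp (a sketch; it assumes pyright itself is already installed):

-- Sketch: start pyright through nvim-lspconfig and advertise nvim-cmp's completion capabilities.
local capabilities = require('cmp_nvim_lsp').default_capabilities()

require('lspconfig').pyright.setup({
  capabilities = capabilities,
})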

nyngwang commented 2 years ago

@Shougo: Pyright seemed to cause high CPU usage, so I changed to pylsp. Anyway, I will try pyright again.

Shougo commented 2 years ago

Is it possible to cache all the completion items from sub-packages from the start?

I think pyright implements this feature, and it costs CPU. So it is a trade-off.

nyngwang commented 2 years ago

[...]

You can use pyright lsp instead.

@Shougo: I just confirmed that pyright is indeed (much) faster. Thanks for your help! I remember I migrated to pylsp because of the CPU usage issue, and I didn't notice the performance issue on (somewhat) large packages back then.

Now another problem: with pylsp I could use the Python formatter black. What's the recommended alternative when using pyright?

UPDATE: it turns out I have to reinstall diagnosticls and run pip install isort black to use those tools alongside pyright.

nyngwang commented 2 years ago

[...] It would also be good to show some statistics on source performance [...]

(scroll to top)

Let me continue this thread. This might be off-topic, but it would be cool if these statistics could be shown on the status line, or even via plugins like nvim-notify when loading is complete.
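nvim-cmp doesn't expose per-source timings, but as a very rough sketch you can at least surface keystroke-to-menu latency through its menu_opened event and vim.notify (the event name is assumed from plugin integrations; this is not an official statistics API):

-- Rough sketch: report how long the completion menu took to appear after the last edit.
-- Measures overall latency, not per-source statistics, which cmp does not expose.
local cmp = require('cmp')
local last_edit

vim.api.nvim_create_autocmd('TextChangedI', {
  callback = function()
    last_edit = vim.loop.hrtime()
  end,
})

cmp.event:on('menu_opened', function()
  if not last_edit then return end
  local ms = (vim.loop.hrtime() - last_edit) / 1e6
  vim.notify(('completion menu opened after %.0f ms'):format(ms), vim.log.levels.INFO)
  last_edit = nil
end)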