microsoft / vscode

Visual Studio Code
https://code.visualstudio.com
MIT License

Asynchronously show completion items from different sources or extensions! #107343

Open HaoboGu opened 4 years ago

HaoboGu commented 4 years ago

Currently, completion items from different sources/extensions are shown synchronously, which means no results are shown until the last completionProvider completes. There are a lot of great auto-complete extensions in the marketplace, and some of them are quite efficient, but the overall completion latency still depends on the slowest extension, which I think is unreasonable. So I propose adding an asynchronous way to show the completion list from different extensions or sources, just like IntelliJ IDEA:

(screenshot)

In VS Code:

(screenshot)

Here I manually added a delay in my own extension, but as you can see, the results coming from the vscode-java extension are also affected, which results in a bad user experience.

Please consider this, thank you!

jrieken commented 4 years ago

Yeah, this isn't the first time this issue has come up, and it's not the only place where it comes up, e.g. there are related discussions around reference search results. These are the challenges that need good ideas:

  1. When showing results as they come in, you will always see snippets and word-based suggestions first. Those are computed in the renderer or in a separate worker and are always very fast. So, you will always see the lower-quality results first.
  2. We have logic to let "better" providers mute results from other providers. E.g. a provider that is registered specifically for a language overwrites results from a provider that is registered for all languages (using the * document selector). We do this to hide word-based suggestions in the presence of "good" suggestions (though not everyone likes this). However, this logic requires all providers to have finished.
  3. Sorting/ranking is an issue. Imagine you have the Foo-prefix and quickly get a suggestion that is barFangBoo. A split second later comes another suggestion, FooBar. This suggestion should be ranked higher, which means the widget needs to be re-sorted while it is showing. That can be very problematic and confusing to users. Imagine that in that very moment you wanted to select the barFangBoo suggestion. So either you re-sort at the cost of confusion, or you simply append at the cost of not showing the best suggestions on top (which is amplified given that "simple" providers are likely to be faster).
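To make the trade-off in point 3 concrete, here is a small TypeScript sketch. The scoring function and all names are illustrative, not VS Code internals; it only models why "append on arrival" and "re-sort on arrival" each lose something:

```typescript
// Toy model of the trade-off: a later, better-matching suggestion
// either forces a visible re-sort or stays below worse matches.
type Suggestion = { label: string; score: number };

// Illustrative score: exact prefix match (2) beats subsequence match (1).
function score(prefix: string, label: string): number {
  const p = prefix.toLowerCase();
  const l = label.toLowerCase();
  if (l.startsWith(p)) return 2;
  let i = 0;
  for (const ch of l) {
    if (i < p.length && ch === p[i]) i++;
  }
  return i === p.length ? 1 : 0;
}

// Strategy A: append as results arrive -- list stays stable, but the
// best match may end up at the bottom.
function appendMerge(shown: Suggestion[], incoming: Suggestion): Suggestion[] {
  return [...shown, incoming];
}

// Strategy B: re-sort on arrival -- best match on top, but the list
// jumps under the user's cursor.
function resortMerge(shown: Suggestion[], incoming: Suggestion): Suggestion[] {
  return [...shown, incoming].sort((a, b) => b.score - a.score);
}

const prefix = "Foo";
const early = { label: "barFangBoo", score: score(prefix, "barFangBoo") }; // fast provider
const late = { label: "FooBar", score: score(prefix, "FooBar") };          // slow provider

const appended = appendMerge([early], late); // barFangBoo stays on top
const resorted = resortMerge([early], late); // FooBar moves to the top
```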
HaoboGu commented 4 years ago

Hello jrieken, thanks for your reply!

  1. It's great to have word-based suggestions in plaintext files. But in general code files, word-based suggestions won't help a lot. So I think word-based suggestions can be disabled if other completion providers exist, not just overwritten.

  2. If the lower-quality suggestions are disabled, the overwriting mechanism is no longer necessary.

  3. This is the core issue IMO. I did some tests on IntelliJ IDEA, and there are some interesting tricks I observed: 1) the first 5 results are never re-sorted/overwritten; 2) snippet results do sometimes appear at the top of the list, but if I don't choose them, they are filtered out as I keep typing new characters and better results come to the top; 3) all re-sorting/re-ranking operations have a time-out, after which no re-sorting/re-ranking is performed (in the current version of IDEA, this time-out seems to be 200-300ms, so for me the re-ranking is done before I can even read all the results, and it doesn't confuse me at all); 4) the re-ranking is triggered again when I keep typing, filtering out old items.

So how about setting a hard time-out for re-ranking to prevent confusing users? In that case the rank scores of completionItems might be invalid, but I think that's acceptable, because the order is fixed up as the user keeps typing.
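That rule could look roughly like the following TypeScript sketch. The window constant and all names are assumptions taken from the IDEA observation above, not anything VS Code actually implements:

```typescript
// Hedged sketch: re-sort arriving results only within a fixed window
// after the widget opened; afterwards only append, and defer any
// re-ranking to the next keystroke.
type Item = { label: string; score: number };

// Illustrative value, in the 200-300ms range observed in IntelliJ IDEA.
const RERANK_WINDOW_MS = 250;

function merge(
  shown: Item[],
  incoming: Item[],
  widgetOpenedAt: number,
  now: number,
): Item[] {
  const merged = [...shown, ...incoming];
  if (now - widgetOpenedAt <= RERANK_WINDOW_MS) {
    // Early enough: the user has barely seen the list, re-sorting is safe.
    return merged.sort((a, b) => b.score - a.score);
  }
  // Too late: keep the visible order stable, just append.
  return merged;
}
```

Within the window the best result still bubbles to the top; past it, order stability wins and the "wrong" order is corrected on the next typed character.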

jrieken commented 4 years ago

So I think word-based suggestions can be disabled if other completion providers exist, not just overwritten.

That's a bold assumption which isn't generally true. There are many cases in which word-based suggestions make sense despite having a smart completion provider: e.g. typing inside comments, typing inside strings, or cases in which the smart provider has issues/bugs.

HaoboGu commented 4 years ago

There are many cases in which word-based suggestions make sense despite having a smart completion provider: e.g. typing inside comments, typing inside strings, or cases in which the smart provider has issues/bugs.

As far as I know, word-based suggestions are disabled by default in code files right now. I just tested it: even when I was typing inside a string or comment, they weren't triggered. (screenshot)

Actually, in my experience, I don't see any inconvenience in other IDEs that don't have word-based suggestions.

If we cannot simply disable word-based suggestions, why don't we just take advantage of asynchrony?

We don't have to wait for all completion providers: word-based suggestions are hidden as soon as completion items from other providers exist, or when the first asynchronous result from another provider comes in. If better results are really, really late, re-rank the completion list and put word-based completions at the bottom once the user continues to type.

jrieken commented 4 years ago

You might have word-based completions disabled. Again, we will not have a discussion about the removal of word-based completions. This is not an option!

HaoboGu commented 4 years ago

Yeah, OK. I am wondering whether the overwriting logic you mentioned above can be migrated to an asynchronous model?

The main problem with introducing asynchronous suggestions is how we update the completion list. Combining a hard time-out with re-ranking when the user continues to type is a feasible approach IMO. What do you think of this plan?

jrieken commented 4 years ago

The simplest approach would be this:

  1. Always wait for the first non-simple provider (non word-based, non snippet)
  2. Then update the list as more results come in
  3. Don't resort when more results come in
  4. Resort when more prefix text is typed
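Those four steps can be sketched in TypeScript as follows. The provider and rendering shapes are made up for illustration; this is not the VS Code extension API:

```typescript
// Illustrative types -- not the VS Code extension API.
type Item = { label: string; score: number };
type Provider = { simple: boolean; provide: () => Promise<Item[]> };

// Steps 1-3: hold "simple" results (word-based, snippets) until the
// first non-simple provider answers, then render; later arrivals are
// appended without re-sorting.
async function showCompletions(
  providers: Provider[],
  render: (items: Item[]) => void,
): Promise<Item[]> {
  const shown: Item[] = [];
  const buffered: Item[] = []; // simple results waiting for step 1
  let firstSmartArrived = false;

  await Promise.all(
    providers.map(async (p) => {
      const items = await p.provide();
      if (p.simple && !firstSmartArrived) {
        buffered.push(...items); // step 1: don't show simple results yet
        return;
      }
      if (!p.simple && !firstSmartArrived) {
        firstSmartArrived = true;
        // First smart result: sort once, then attach the buffered simple ones.
        shown.push(...[...items].sort((a, b) => b.score - a.score), ...buffered);
      } else {
        shown.push(...items); // steps 2+3: append, never re-sort
      }
      render(shown);
    }),
  );
  return shown;
}

// Step 4: re-sort only when the user types more prefix text.
function onPrefixTyped(shown: Item[]): Item[] {
  return [...shown].sort((a, b) => b.score - a.score);
}
```

Under this scheme nothing is rendered before the first smart provider answers, so word-based and snippet results never flash on top and then get muted.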
HaoboGu commented 4 years ago
  2. Then update the list as more results come in

Does 'update' mean appending results at the bottom of the list?

jrieken commented 4 years ago

yes

HaoboGu commented 4 years ago

This looks good to me!

HaoboGu commented 4 years ago

Any plan on this issue?

jaredtmartin commented 3 years ago

Please do this! I just switched from Sublime and the sloooow snippets are the #1 problem I'm having. I tried removing extensions, but there are some that are essential. I already have muscle memory for some snippets and I type the snippet name and hit the tab key all in one move. This delay simply breaks my workflow.

aejuice-github commented 2 years ago

Simple sublime does a much better job on snippets. Why is it so hard to load my snippets fast and not wait for everything else??

Ares9323 commented 2 years ago

Simple sublime does a much better job on snippets. Why is it so hard to load my snippets fast and not wait for everything else??

This is still an issue for me after years. Fortunately Copilot has kinda replaced my need for snippets; it's kinda nonsense that an online AI is faster than the C++ IntelliSense on my PC (everything is stored on a PCIe 4 SSD).

W4RH4WK commented 2 years ago

2. We have logic to let "better" providers mute results from other providers. E.g. a provider that is registered specifically for a language overwrites results from a provider that is registered for all languages (using the * document selector). We do this to hide word-based suggestions in the presence of "good" suggestions (though not everyone likes this). However, this logic requires all providers to have finished.

Is there a way to turn this behavior off? I have enabled wordBasedSuggestions for C++ but I still only get semantic suggestions from the C/C++ extension.

As complex as C++ is, unfortunately, sometimes the wanted (sub)string does not show up in semantic completion. Word-based completion, however, has the string I need.

jrieken commented 2 years ago

Is there a way to turn this behavior off? I have enabled wordBasedSuggestions for C++ but I still only get semantic suggestions from the C/C++ extension.

AFAIK the C++ extension disables word-based suggestions per language, and therefore they must be explicitly re-enabled per language, e.g. in your settings.json file have a section like


"[cpp]": {"editor.wordBasedSuggestions": true},
W4RH4WK commented 2 years ago

AFAIK the C++ extension disables word-based suggestions per language, and therefore they must be explicitly re-enabled per language, e.g. in your settings.json file have a section like

"[cpp]": {"editor.wordBasedSuggestions": true},

That's already there. It's what I meant by "I have enabled wordBasedSuggestions for C++".

samu commented 2 years ago

I'm also interested in this, or more specifically: it would be great if user-defined snippets showed up instantaneously.

My use case is this: I have small snippets configured, such as this one

  "pipe": {
    "prefix": "p",
    "body": ["| "],
    "description": "Insert single pipe"
  },

which allows me to use p+Tab to quickly insert a |, a character which, without this snippet, is a bit of a stretch to type.

In the editor I previously used (Atom) I had the exact same snippets, and I was able to use the p+Tab shortcut reliably. The delay in VS Code unfortunately makes it quite unreliable: it often happens that the suggestions are simply not loaded yet when I press Tab, and so I am left with "p " instead of "| ".

I noticed that, once suggestions have been displayed to me in the active editor, they show up much faster the next time. Perhaps a first step towards a solution would be to provide a setting such as editor.suggestions.loadOnActivation which, when set to true, runs the providers and fills the cache.

Eshnek commented 2 years ago

(@samu)

I'm also interested in this, or more specifically, it would be great if user defined snippets would show up instantaneously.

👍 +1 to this, I have similar snippets defined, such as

"class_hxx": {
    "prefix": ["hxx", "ee"],
    "body": [
        "#pragma once\n\nclass $1\n{\npublic:\n\n\t$0\n\n}; // class $1\n"
    ]
},

Half of the time I have to wait longer than it would take to type out the snippet's expansion manually, when really this result should be shown in < 50 ms in 100% of scenarios.

In my scenario the C++ extension takes a while to parse a newly created file, which is exactly when I typically use this snippet. Its completions are not available until it finishes parsing.

One far-from-ideal workaround is to use AutoHotkey to define another shortcut directly for the snippet.

I think making user-defined snippets show up quickly would be less difficult to implement than a full async result mechanism for every provider. Fully async results here would be amazing.

phil294 commented 2 years ago

The simplest would be this

  1. Always wait for the first non-simple provider (non word-based, non snippet)

I'm not sure this is a great idea: one of the main advantages of this very feature would be having word-based completions while the (only) dedicated provider is still loading, if it's a very slow one. You'd be taking away arguably the largest share of the cases where async completions would be helpful.

In the past, several folks voted for this functionality because of the C/C++ extension, which was notoriously slow (probably resolved by now). However, this is most likely a recurring problem. For example, completions from the Crystal language support LSP can sometimes take 3-5 seconds. This waiting time could be largely improved by showing word-based suggestions in the meantime, and also with #21611.

These are definitely difficult trade-offs, though, especially regarding the ordering of suggestions, as outlined by jrieken above. As always, the best solution would be to make it configurable. How about adding this feature as experimental and disabled by default, so we can gather more feedback from interested users? Same for #21611.

MarcWeber commented 1 year ago

Maybe a solution would be to show all completion providers in a checkbox list and let the user choose which ones are active. Then at least you can avoid a slow one if you temporarily don't care about it.