microsoft / vscode

Add streaming support in vscode #105870

Open heejaechang opened 4 years ago

heejaechang commented 4 years ago

This is conceptually a dup of https://github.com/microsoft/vscode/issues/20010. I'm creating a new issue since the previous one is closed, and that one was about custom streaming support rather than the official streaming (partial results) support that has been added to LSP.

The Language Server Protocol has added partial results (streaming) support (https://microsoft.github.io/language-server-protocol/specifications/specification-current/#partialResults), and VS has already added support for it.

As far as I know, the support was added to LSP because it was one of the top complaints from VS users, especially people who have large codebases.

For example, finding symbols from the workspace (WorkspaceSymbol) can easily take several seconds on a big codebase and return a lot of results. Streaming takes the same amount of time to produce the full result set, but most of the time users get what they want before the full results arrive. So rather than users always paying the worst-case cost, streaming reduces the time users have to wait.
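
For reference, here is a minimal sketch (not the Pylance implementation) of what server-side partial results look like with the vscode-languageserver npm package: the client attaches a partialResultToken to the request, the server streams chunks back as $/progress notifications, and per the spec the final response carries no result values once streaming was used. The searchIndex generator below is a hypothetical placeholder for a real symbol index.

```ts
import {
  createConnection,
  ProposedFeatures,
  SymbolInformation,
} from 'vscode-languageserver/node';

const connection = createConnection(ProposedFeatures.all);

connection.onInitialize(() => ({
  capabilities: { workspaceSymbolProvider: true },
}));

// Hypothetical placeholder for a real symbol index walk.
function* searchIndex(query: string): Iterable<SymbolInformation> {
  // yield matching symbols here, cheapest sources first
}

connection.onWorkspaceSymbol((params, token, _workDone, resultProgress) => {
  const all: SymbolInformation[] = [];
  let batch: SymbolInformation[] = [];
  for (const symbol of searchIndex(params.query)) {
    if (token.isCancellationRequested) break;
    if (!resultProgress) {
      all.push(symbol); // client didn't ask for partial results
      continue;
    }
    batch.push(symbol);
    if (batch.length >= 100) {
      resultProgress.report(batch); // sent as a $/progress notification
      batch = [];
    }
  }
  if (resultProgress) {
    if (batch.length > 0) resultProgress.report(batch);
    return []; // results were streamed, so the final response stays empty
  }
  return all;
});

connection.listen();
```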

VS has full support for streaming (find all references, document symbols, document highlights, pull-model diagnostics, workspace symbols, and completion).

They even support partial results (streaming) for document-level features such as "Document Symbol", "Document Highlights", etc.

We (Pylance, the Python LS for VS Code) are not asking for that much; at a minimum, we'd like support for workspace-wide features such as "Workspace Symbol", "Find All References", "Call Hierarchy", etc.

We do understand that streaming (partial results) requires incremental updates of the UI, and that is not easy (flickering, ordering, sizing, grouping, etc.). I believe VS had those issues as well when it started adding streaming support (even before LSP), but once that work was done, I believe VS got a lot better at supporting large codebases.

heejaechang commented 4 years ago

tagging @milopezc who did VS LSP streaming support. tagging @dbaeumer as FYI.

heejaechang commented 4 years ago

Tagging @gundermanc, who did the VS-side work of updating the UI incrementally for VS Search. He can provide more detailed info, but it does things like:

  1. wait for the user to stop typing before updating the result list,
  2. update the results at a fixed interval (sorting, merging),
  3. freeze items around the user-selected item, etc.

and so far that has given VS a very nice experience. You can try it with "Ctrl+Q" in VS. (A rough sketch of these batching ideas is below.)
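
To make the list above concrete, here is a hedged sketch of that kind of incremental result UI: typing is debounced, streamed chunks are merged and re-sorted at a fixed interval rather than per chunk, and the user's selected item is pinned so the list doesn't jump under the cursor. All names are illustrative, not VS internals.

```ts
// (1) Wait for the user to stop typing before starting a new search.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms = 250) {
  let handle: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(handle);
    handle = setTimeout(() => fn(...args), ms);
  };
}

type Result = { label: string; score: number };

class StreamingResultList {
  private pending: Result[] = [];
  private shown: Result[] = [];
  private selected = -1; // index the user has highlighted, if any
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private render: (items: Result[]) => void,
    private flushMs = 200,
  ) {}

  select(index: number): void {
    this.selected = index;
  }

  // Called for every chunk streamed by the language server.
  add(chunk: Result[]): void {
    this.pending.push(...chunk);
    // (2) Merge into the UI at a fixed interval instead of per chunk.
    this.timer ??= setInterval(() => this.flush(), this.flushMs);
  }

  private flush(): void {
    if (this.pending.length === 0) return;
    const keep = this.selected >= 0 ? this.shown[this.selected] : undefined;
    const merged = [...this.shown, ...this.pending.splice(0)];
    merged.sort((a, b) => b.score - a.score);
    // (3) Keep the selected item at a stable position; a real
    // implementation would pin a whole window of items around it.
    if (keep) {
      merged.splice(merged.indexOf(keep), 1);
      merged.splice(this.selected, 0, keep);
    }
    this.shown = merged;
    this.render(this.shown);
  }
}
```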

gundermanc commented 4 years ago

For workspace/symbol specifically, Visual Studio actually supports two slightly different versions of a streaming interaction with slightly different behaviors and guarantees, but both:

Roslyn, for example, uses this guarantee to do a long-running brute-force search through all symbols in the project, evaluating in order of likelihood of a match (recent projects, close-by projects, etc.). This means that the majority of the time the symbol is found instantly, but in some cases you may have to wait a few seconds for it to be found.
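
A hedged sketch of that ordering idea (not Roslyn's actual code; the bucket functions are illustrative stubs): search sources in decreasing likelihood of a match, streaming each bucket's hits as soon as it completes, so the common case reports almost instantly while the long tail keeps streaming.

```ts
// Illustrative stubs; a real server would query its symbol index.
const searchRecentProjects = async (query: string): Promise<string[]> => [];
const searchNearbyProjects = async (query: string): Promise<string[]> => [];
const searchEverythingElse = async (query: string): Promise<string[]> => []; // brute force

async function orderedSearch(
  query: string,
  report: (hits: string[]) => void, // streams a chunk to the client
): Promise<void> {
  // Evaluate buckets in order of likelihood of a match.
  for (const bucket of [searchRecentProjects, searchNearbyProjects, searchEverythingElse]) {
    report(await bucket(query));
  }
}
```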

GoTo:

Symbol and file navigation only box in VS.

Ctrl+Q/VS Search:

Search aggregation 'global search' box in VS for menu commands, options, templates, files, and symbols.

Doc with more detailed aggregator architecture for Microsoft internal viewers: https://microsoft-my.sharepoint.com/:w:/p/chgund/EWPw43pfMghFj9LCcj2K8gYBTJKGoz5ETKWKIRraTeaqnA?e=1awZO6

heejaechang commented 3 years ago

TypeScript has the same cancellation/partial-results issue: once users invoke "Find All References", they have to wait until it is done, and all the results show up at once.

heejaechang commented 2 years ago

Another ask from users related to streaming: https://github.com/microsoft/pylance-release/issues/2236

Basically, the user has a multi-language workspace and does not want one LS to slow down the whole "go to symbols" experience. Streaming support will improve the experience since it won't let VS Code be blocked by any one LS.
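
As an illustration of that point, a hedged sketch (not VS Code's implementation) of how a client can merge several providers' result streams so the fastest results surface immediately, rather than awaiting every provider before showing anything:

```ts
// Merge several async streams: whichever provider yields first is
// surfaced right away, so one slow LS never blocks the others.
async function* mergeStreams<T>(streams: AsyncIterable<T>[]): AsyncGenerator<T> {
  const iterators = streams.map(s => s[Symbol.asyncIterator]());
  const advance = (i: number) => iterators[i].next().then(result => ({ i, result }));
  const inflight = new Map<number, Promise<{ i: number; result: IteratorResult<T> }>>();
  iterators.forEach((_, i) => inflight.set(i, advance(i)));
  while (inflight.size > 0) {
    const { i, result } = await Promise.race(inflight.values());
    if (result.done) {
      inflight.delete(i); // this provider is finished
    } else {
      yield result.value; // surface the fastest provider's chunk now
      inflight.set(i, advance(i));
    }
  }
}
```

With this shape, a fast language server's workspace symbols can render while a slower one is still searching.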

luabud commented 2 years ago

@jrieken while providing full streaming support is a super complex task, would it be feasible to consider it for workspace symbols only, at least as a "first priority"? 🤔

heejaechang commented 2 years ago

I think workspace symbols is currently the only feature that is workspace-wide but also doesn't have any semantic scoping. For example, other workspace-wide features such as find all references, rename, peek references, and rename files only work on files where there are semantic dependencies (in other words, find all references on a Python symbol won't search Java, TS, etc. files even if they are in the same workspace), but workspace symbol will search all of them.

So, if we need to choose one feature, workspace symbol is the one that requires streaming, at least.

nickzhums commented 2 years ago

Sharing some thoughts from the Java team. Streaming support is extremely useful for several cases:

  1. Code completion. Completion responsiveness and performance is always among the top asks from developers, and streaming support would greatly enhance this aspect. Supporting incremental loading of code completion suggestions would have a huge impact on developer satisfaction.
  2. References view (find all references, workspace symbols, etc.). This is often brought up by Java developers with complex codebases, which are becoming increasingly common (since more professional Java developers are adopting VS Code). With those big projects, we see asks in this area quite frequently in our surveys. This aligns with the Python team's thoughts.
  3. Supporting streaming in general will improve UX in a lot of areas.

Would love to see how we can support this :)

ejgallego commented 1 year ago

In our case (coq-lsp), support for streaming textDocument/diagnostic would be very useful.

ljw1004 commented 1 year ago

What is the current status of this request, please?

For Hack LSP, we wish to add streaming support to textDocument/references. For some of our large projects we can compute the first few references within milliseconds, but it takes two minutes (at P95) until we've computed the final references. We would love to display the first few references quickly.

findleyr commented 1 year ago

In gopls there's a tension in our completion logic between latency and search depth. Being able to stream results would eliminate that tension.

DanTup commented 1 year ago

> In gopls there's a tension in our completion logic between latency and search depth. Being able to stream results would eliminate that tension.

Dart has this dilemma too. Users expect code completion to include all symbols, including those that have not yet been imported into the current file (they will be auto-imported when selected), but this full list is much more expensive to compute. So we assign a time budget and stop building completions when it is exhausted (because otherwise completion could appear very slow), which means completion results can appear inconsistent (particularly on slower machines).

What I'd really like to do is send all of the locally imported items first (which can be computed very quickly), because most likely the user wants something from that list, but still be able to provide the larger list. Using isIncomplete=true doesn't help much: truncating the list based on which items were discovered first (rather than by rank) doesn't give good results, and truncating at all without computing the entire list could exclude exact matches (and the user cannot type any additional characters to trigger further searching).
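
For illustration, a hedged sketch of the time-budget approach described above, using the vscode-languageserver package (the allCandidates generator and BUDGET_MS are hypothetical, not Dart's actual implementation):

```ts
import {
  CompletionItem,
  CompletionList,
  CompletionParams,
  createConnection,
  ProposedFeatures,
} from 'vscode-languageserver/node';

const connection = createConnection(ProposedFeatures.all);
const BUDGET_MS = 100; // stop building the list after this long

// Hypothetical placeholder: yields cheap local symbols first, then the
// more expensive not-yet-imported ones.
function* allCandidates(params: CompletionParams): Iterable<CompletionItem> {
  // yield candidates here
}

connection.onCompletion((params, token): CompletionList => {
  const deadline = Date.now() + BUDGET_MS;
  const items: CompletionItem[] = [];
  for (const item of allCandidates(params)) {
    if (token.isCancellationRequested || Date.now() > deadline) {
      // Out of budget: return what we have and mark the list incomplete
      // so the client re-queries as the user keeps typing. As noted
      // above, this truncation is exactly what streaming would avoid.
      return { isIncomplete: true, items };
    }
    items.push(item);
  }
  return { isIncomplete: false, items };
});

connection.listen();
```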

JamyDev commented 7 months ago

For our internal LSP at Uber, we would love to have this as well, to avoid delays from the native LSPs in favor of our much faster cached results. cc @isidorn

findleyr commented 2 months ago

Recently, the Go and Dart LSP teams at Google discussed our wish list for LSP+VS Code features. At the top of this list were:

In combination, we feel those three could help us deliver a much more responsive and rich language experience. Is there anything we can do to help these get prioritized?