emacs-lsp / lsp-mode

Emacs client/library for the Language Server Protocol
https://emacs-lsp.github.io/lsp-mode
GNU General Public License v3.0

Implement support for result streaming [3.15] #1117

Open yyoncho opened 4 years ago

yyoncho commented 4 years ago

The upcoming https://microsoft.github.io/language-server-protocol/specifications/specification-3-15/ version of the protocol will support streaming results (e.g. find-references, completion, find definitions, etc.). We will have to find a way to represent that data in lsp-mode.

A possible solution would be to implement streaming support on top of helm/ivy and enable it only when the user uses them. When they are not present, fall back to standard xref and do not declare streaming support.
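
In 3.15 the client attaches a `partialResultToken` to the request, and the server streams chunks back as `$/progress` notifications carrying that token; when streaming is used, the whole result arrives via the notifications and the final response carries no result values. A minimal Elisp sketch of the dispatch (the `my/` names are hypothetical placeholders, not lsp-mode's actual API):

```elisp
(require 'cl-lib)
(require 'subr-x)

(defvar my/token-counter 0)
(defvar my/partial-result-handlers (make-hash-table :test #'equal)
  "Maps a partialResultToken to the consumer for its streamed chunks.")

(defun my/send-request (method params)
  "Stub standing in for the real JSON-RPC transport."
  (ignore method params))

(defun my/request-references-streaming (params consumer)
  "Send textDocument/references with a partial-result token attached.
CONSUMER is called with each chunk of locations as it arrives; a
helm/ivy frontend can render incrementally, while the xref fallback
would simply accumulate chunks and hand them over at the end."
  (let ((token (format "partial-%d" (cl-incf my/token-counter))))
    (puthash token consumer my/partial-result-handlers)
    (my/send-request "textDocument/references"
                     (plist-put params :partialResultToken token))))

(defun my/on-progress (token value)
  "Handle a `$/progress' notification carrying a partial result chunk."
  (when-let ((consumer (gethash token my/partial-result-handlers)))
    (funcall consumer value)))
```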

cc @dgutov as the maintainer of company and xref(?).

dgutov commented 4 years ago

If you have a good grasp of the protocol, why not propose an extension to the values allowed by xref? And we can add that in a future version. Maybe also release xref as a "core package".

So far I'm not sure which shape it should take. An iterator? A promise? A chunked iterator?

yyoncho commented 4 years ago

Does anybody know a server implementing that part of the spec? @alanz @robertoaloi @ericdallo @sebastiansturm @kiennq ?

ericdallo commented 4 years ago

There's no support for that currently in the Dart LSP server.

robertoaloi commented 4 years ago

Hi @yyoncho and sorry for the late reply. Erlang LS does not yet support streaming of results. It only has support for progress reports at this stage.

yyoncho commented 4 years ago

Thank you, I will ask in language server protocol repo. Hopefully, it has been implemented somewhere...

shoobinger commented 3 years ago

@yyoncho I'm planning to implement the partial results feature in the language server for Kotlin that I develop. For now I'm working on code completion, but other features would support streaming as well. The language server is in an early WIP state, but it's usable, and I guess it can be used to develop and test the feature client-side as soon as it is ready on my part. Also, I will be glad to help with the client-side changes.

nbfalcon commented 3 years ago

@suive how is your language server different from https://github.com/fwcd/kotlin-language-server? In any case, I can offer you my thoughts on implementing this on lsp-mode's side:

Note that these are only suggestions, and there might be a better design I haven't thought of.

yyoncho commented 3 years ago

@suive thank you.

For completion, we should coordinate with @dgutov. AFAICT most of the work/challenges will be on the company-mode side. It seems to me that we won't be able to fit streaming into capf without going to emacs-devel and waiting for a new Emacs release. This means that the practical solution will be to implement it as a traditional company backend and then push it into capf when appropriate. But it all depends on @dgutov's view.
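
For reference, a rough sketch of the "traditional company backend" route. Company's existing async convention lets the `candidates` command return `(:async . FETCHER)`, where `FETCHER` receives a callback; `my/request-completions-streaming` below is a hypothetical transport, not an existing lsp-mode function:

```elisp
(require 'company)

(defun my/request-completions-streaming (prefix cb)
  "Stub transport: immediately report a single, final, empty chunk."
  (ignore prefix)
  (funcall cb nil t))

(defun my/company-lsp-streaming (command &optional arg &rest _)
  "Sketch of a company backend that consumes streamed completion chunks."
  (interactive (list 'interactive))
  (pcase command
    (`interactive (company-begin-backend 'my/company-lsp-streaming))
    (`prefix (company-grab-symbol))
    (`candidates
     (cons :async
           (lambda (callback)
             (let ((acc nil))
               (my/request-completions-streaming
                arg
                (lambda (chunk done)
                  (setq acc (append acc chunk))
                  ;; Stock company expects the callback to fire once,
                  ;; so we can only report the merged set at the end; a
                  ;; streaming-aware company could instead be notified
                  ;; on every chunk.
                  (when done (funcall callback acc))))))))))
```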

cc @kiennq

@nbfalcon sounds good. We might create integration tests for streaming results.

dgutov commented 3 years ago

I really wish people visited emacs-devel more; not sure what else could restart the process of getting asynchronous completion support into capf. And streaming, too. All ending up in a new-and-rewritten version of capf.

But starting on it in company should be fine too. We can consider it incubating new API features, as well as experiment with how the change should look in the UI.

BTW, is streaming something that gives a measurable improvement to the user experience? Reduced latency until the first popup display, less memory consumption on the backend, all of that?

I just figured if you need to do both fuzzy matching and the appropriate sorting of matches, you need to do the full search for them on the backend anyway, and then you might only be able to save on serialization and bandwidth.

kiennq commented 3 years ago

I think at least it will reduce the latency until the first popup. The part about fuzzy matching and sorting is interesting, mainly how we should approach it:

- New items should be fuzzy matched, sorted into the correct place?

Also, how is company-mode going to deal with this streaming result? Should company-mode provide a special popup-update notification that capf should call to let company-mode know there are new items available?

dgutov commented 3 years ago

> I think at least it will reduce the latency until the first popup.

It can, but I think the question deserves some benchmarking. And also, maybe a comparison against YouCompleteMe's original approach, which was simply to limit the number of returned completions to 1000 (they did not cache on the client, obviously).

> New items should be fuzzy matched, sorted into the correct place?

If you can [do the sorting properly on the client]. Of course, the user shouldn't notice that happening until they type some new char, or maybe press <down> or M-n (this part will require explicit new support in company). The popup shouldn't blink or shuffle items around while the user is not interacting with it.

> Should company-mode provide a special popup-update notification that capf should call to let company-mode know there are new items available?

I think it should be a "pull" interface (like, there is a container, it reports that the current set of items is incomplete, and you can call it in a special way to get more items). Alternatively, it can be an extension of the current "async" interface where you will be able to pass the new arriving items to the callback multiple times (the callback's sig will change from (lambda (result)) to (lambda (result &optional more-to-come))). The former might be closer to how the capf author will like it, but the latter could be a little easier to implement (it will probably get translated to some kind of "pull" interface internally, though).
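
Both shapes could look roughly like this in Elisp (illustrative names only, not an existing company API). The first function drives the proposed extended callback; the second is the kind of "pull" container such a callback might be translated into internally:

```elisp
(defun my/deliver-streamed (callback chunks)
  "Drive CALLBACK, which has the proposed extended signature
\(lambda (result &optional more-to-come)), once per chunk in CHUNKS.
A non-nil MORE-TO-COME argument means further chunks will follow."
  (while chunks
    (funcall callback (pop chunks) (and chunks t))))

(defun my/make-pull-container ()
  "Return a container that buffers streamed items for pulling.
It reports whether the current set of items is still incomplete."
  (let ((items nil) (complete nil))
    (list :push (lambda (chunk more)
                  (setq items (append items chunk))
                  (unless more (setq complete t)))
          :pull (lambda () items)
          :incomplete-p (lambda () (not complete)))))
```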

nbfalcon commented 3 years ago

I don't see why we would need result streaming for completion at all. For things like lsp-ivy-workspace-symbol it makes sense, because there could be thousands of candidates, and some symbols might be in indexes not yet loaded from disk. But auto-completion is only based on parsing most of the time, so all information is available at once to the LS, not lazily loaded. Disclaimer: I am not an LS developer, so this might be nonsense.

ericdallo commented 3 years ago

Just FYI, there was a time when the Dart LSP server returned thousands of completion items, as you can see in this table (there is info for other languages too). Wouldn't this streaming help with that?

nbfalcon commented 3 years ago

We'd just get thousands of results across multiple JSON responses. Would that really be helpful? We'd be spending the same time parsing (perhaps even more).

shoobinger commented 3 years ago

> completion is only based on parsing most of the time

Well, this unfortunately is not the case for Kotlin, where a more comprehensive analysis is needed to get extension properties (those can be scattered anywhere in the project and its dependencies). We can let the user get fast-to-compute local completion results, and then asynchronously send the other (more expensive to compute) part. That's how they do it in IntelliJ IDEA, I believe.

dgutov commented 3 years ago

@suive How do you deal with fuzzy sorting them, then?

shoobinger commented 3 years ago

@dgutov In my opinion it will be a good idea to lift this restriction, if there aren't many technical difficulties:

> Of course, the user shouldn't notice that happening until they type some new char, or maybe press <down> or M-n (this part will require explicit new support in company). The popup shouldn't blink or shuffle items around while the user is not interacting with it.

In my experience there is nothing wrong with a completion popup updating without user interaction. I think the ability to get (partial) results faster is more important in this case. Again continuing the comparison with IDEA: they manage to update the window in the background, so you see new items appearing in the window, not necessarily at the end of the list. The selection is kept on the item selected by the user.

In our case we can either rely on client-side sorting (re-evaluating it after each chunk), or treat each update as a full replacement, thereby moving sorting to the backend (bandwidth is not even an issue, in my opinion). But the second solution is not the way it is intended in the protocol, I believe.
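
The first option (client-side sorting re-evaluated after each chunk, with the selection kept stable) could be sketched like so; the scoring function is a stand-in, not any real fuzzy matcher:

```elisp
(require 'cl-lib)

(defun my/candidate-score (cand)
  "Stand-in scorer; a real client would use its fuzzy matcher here."
  (- (length cand)))

(defun my/merge-chunk (current chunk)
  "Return CURRENT with CHUNK merged in, re-sorted by score.
`append' copies CURRENT, so the destructive `sort' is safe here."
  (sort (append current chunk)
        (lambda (a b) (> (my/candidate-score a)
                         (my/candidate-score b)))))

(defun my/reselect (candidates selected)
  "Find SELECTED's new index so the UI can keep it highlighted."
  (cl-position selected candidates :test #'equal))
```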

dgutov commented 3 years ago

> Again continuing the comparison with IDEA: they manage to update the window in the background, so you see new items appearing in the window, not necessarily at the end of the list. The selection is kept on the item selected by the user.

Hmm, I suppose I should see it first. VS Code does that as well, I imagine?

Whether we can do this in company also depends on the rendering methods available to us (overlays and child frame based popups). Like, if doing that will introduce extra glitches/blinking, then I'd wonder whether that is worth it.

yyoncho commented 3 years ago

> Like, if doing that will introduce extra glitches/blinking, then I'd wonder whether that is worth it.

Posframe solutions won't blink, right?

dgutov commented 3 years ago

We should test that with some prototype.

One reason I'm not personally using a posframe-based frontend yet is that they're generally slower (on a GTK3 Emacs build) and do show some artefacts from time to time.