Closed itome closed 4 years ago
@itome what version of the Dart/Flutter SDK are you using? The docs are expected to be added in the `resolve` call. There are some tests to check that docs are not included in the original request:

It's possible I've missed something somewhere, though. That change should've shipped in Dart v2.4.
Setting `suggestFromUnimportedLibraries = false` has no effect for Flutter users, because `import 'package:flutter/material.dart'` imports almost all of the Flutter SDK's source.
It'll have less of an impact, but it should still be significant - there's a lot of unimported code that should still be excluded. I don't think there's much we can do there since they're all valid to use though.
If you can confirm which version of the SDK you're using (and that you're not using a custom version/snapshot of the server from previous testing), I'll do some measuring here.
When we made previous changes, it sounded like this was solved (https://github.com/emacs-lsp/lsp-mode/issues/851#issuecomment-499315973) - does it seem like it's gotten worse, or was the improvement just not enough?
I'm using Dart 2.8.0:

```
Dart VM version: 2.8.0-edge.08bd1d6be7ecad43e5921df432efca4d18b6ba41 (Sat Jan 4 19:43:06 2020 +0000) on "linux_x64"
```

When I checked the Dart LSP server after the previous change, it seemed to work well, so I thought there had been some regression. Can you check it in Dart 2.8?
@DanTup I think splitting the completion response and marking it as `isIncomplete` would work in this case. Does the Dart analysis LSP server already support response splitting, or is there a plan to implement it?
Thanks, I was able to reproduce this - I'm seeing documentation included in the response to `textDocument/completion` with the latest nightly build.
I don't think `isIncomplete` would help unless we truncated the results - it would probably make things worse, as the client would request a new completion list on every keystroke, and you'd end up deserialising JSON on each keystroke (whereas normally you'd only do it once per completion).
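To make that trade-off concrete, here is a toy model of the request counts (hypothetical client logic illustrating the LSP `isIncomplete` flag, not any real client's code):

```typescript
// Toy model of the trade-off described above (hypothetical client logic).
// With isIncomplete=false the client fetches the list once and filters it
// locally; with isIncomplete=true it must go back to the server (and
// re-deserialise the JSON) on every keystroke.

interface CompletionList {
  isIncomplete: boolean;
  items: string[];
}

function serverRequests(list: CompletionList, keystrokes: number): number {
  // One initial request, plus one per keystroke while the list is incomplete.
  return list.isIncomplete ? 1 + keystrokes : 1;
}

const fullList: CompletionList = { isIncomplete: false, items: ["SizedBox", "SingleChildScrollView"] };
const truncated: CompletionList = { isIncomplete: true, items: ["SizedBox"] };

console.log(serverRequests(fullList, 5));  // 1 request, then filter client-side
console.log(serverRequests(truncated, 5)); // 6 requests, one per keystroke
```

In other words, truncating only pays off if the saving per response outweighs the cost of repeatedly re-requesting and re-parsing.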
Hopefully we won't need it if we fix this, I'll figure out what's gone wrong.
Oh, I see what's going on... The server only has a resolve-like API that we can use to get documentation for the completion items that come via Suggestion Sets. That means some items (which the server returns in-line) will still have documentation attached. I think this should usually only be a small portion though.
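The resolve-based flow being described could be sketched like this (the item shapes and documentation store are invented for illustration; this is not the server's actual implementation):

```typescript
// Sketch of lazy documentation via a resolve-style call (hypothetical model).
// The initial completion response omits documentation; the client asks the
// server to "resolve" only the item it is about to display.

interface CompletionItem {
  label: string;
  documentation?: string;
  data?: { libId: number }; // opaque key the server can use to find docs later
}

// Invented documentation store standing in for the server's analysis data.
const docsStore = new Map<number, string>([[404, "A box with a specified size."]]);

// textDocument/completion: no documentation attached, keeping the payload small.
function completionResponse(): CompletionItem[] {
  return [{ label: "SizedBox", data: { libId: 404 } }];
}

// completionItem/resolve: fill in documentation for a single item on demand.
function resolveItem(item: CompletionItem): CompletionItem {
  const docs = item.data ? docsStore.get(item.data.libId) : undefined;
  return { ...item, documentation: docs };
}

const items = completionResponse();
console.log(items[0].documentation);              // undefined
console.log(resolveItem(items[0]).documentation); // "A box with a specified size."
```

The point of the comment above is that only items coming via suggestion sets go through this lazy path; items the server returns in-line still carry their documentation up front.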
Can you confirm what size JSON you're seeing in the project you're testing with? (If you can repro with a public project and attach the file, that would help too). In https://github.com/emacs-lsp/lsp-mode/issues/851#issuecomment-495194935 it says emacs was taking around 200ms to parse 2MB JSON, but it sounds like you're seeing significantly slower than that - is the JSON significantly larger?
Any news on that?
@ericdallo nothing has changed here recently. It's still not clear how much JSON is being transferred when this is occurring. The comment linked above suggests it takes 200ms to parse 2MB of JSON, so it's not clear if it's much slower than that, or if there is considerably more JSON. Are you able to capture the JSON that's taking a long time to parse?
Thanks, @DanTup. No, I didn't capture it, but I updated my Emacs from 26 to 28 and it's much faster. I think it's related to Emacs 27's native JSON parsing feature.
@DanTup Sorry for leaving this issue aside. This file is a log of the LSP traffic. Can you check why the response is so big? (A ~13MB response for just one `textDocument/completion` call.)
Looking at the log, I think it's big because:
I don't think there's much to do about 1, though 2 could be an easy win (I don't know if it'll change the deserialise time much though) and we may need to make changes that could affect 3 anyway based on discussions in https://github.com/Dart-Code/Dart-Code/issues/2290.
I think the log is pretty-printed, so I don't know whether the whitespace and newlines are included in the response or not.
And I noticed that after I send `textDocument/completion` at the position below, the response returned from the LSP server includes many items that don't contain `Si`. Is this standard behavior?

```dart
body: ListView(
children: <Widget>[
Si
 ^ <- Cursor here.
const SizedBox(height: 24),
```
> the response returned from lsp includes many items that don't contain Si. Is this standard behavior?
I believe that's expected - the full list is sent to the client so that it can be filtered client-side as the user is typing. There's a flag in LSP (`isIncomplete=true`) that tells the client the list is not complete, so it must go back to the server as the user is typing.

That said, the LSP spec is not explicit about whether the client should call back to the server if `isIncomplete=false` and the user presses backspace. If the intention is that it should, then perhaps these could be filtered on the server. I've opened https://github.com/microsoft/language-server-protocol/issues/954 seeking clarification in the spec.
@itome how is the performance recently with latest versions of everything?
I filed another issue at https://github.com/microsoft/language-server-protocol/issues/1008 with an idea that I think could help us reduce the payload by over 50% (related to point 3 from the comment above), plus I'm still waiting to hear back on https://github.com/microsoft/language-server-protocol/issues/954 which might give some additional options.
@DanTup After the change in emacs-lsp (https://github.com/syl20bnr/spacemacs/pull/13455), the performance improved, but it is still slow compared to using the Rust or Go language servers. I believe server-side filtering is the best solution, and I'm waiting for the spec to be clarified.
I am wanting to register my interest in this item, especially the part about reduced payload. Basically, yes please.
A relevant discussion has occurred in the vim LSC project about the performance of the Dart Language Server; see discussion here.
The issue being that when doing simple term completion it appears that the Dart language server, when in a Flutter project, is sending upwards of 6,000 candidates which then need to be filtered by the language client. In the case of LSC that would be via the VimScript language, which is notoriously slow when processing large lists.
In a JavaScript React project, Microsoft's tsserver appears to be sending about 1/5th the number of candidates (about 1,200). Maybe the Flutter API simply has way more type names and they are all sent over? I am not quite sure about the intricacies of LSP and term completion.
Nevertheless, any optimisation done via this issue would be appreciated.
Best regards.
> The issue being that when doing simple term completion it appears that the Dart language server, when in a Flutter project, is sending upwards of 6,000 candidates which then need to be filtered by the language client.
5,630 completion items were being sent when using Flutter v1.12.13+hotfix.8
I should clarify. Since that last post, I upgraded Flutter (and its bundled Dart) to the latest in the hope that speed would improve.

With Flutter v1.17.3, I appear to be receiving 26,789 completion items (at least according to an echo statement deep in the bowels of the LSC plugin) for the same simple term completion (not a `.` completion).
This is bringing the language client to its knees even on a fast machine. Processing this 5 times larger list is too much for the plugin. Auto-completion is now unusable.
Is this the expectation? The Dart language server appears to be sending an ever-expanding list of term completions, 5 times more between the above-listed Flutter releases, and the LSP client needs to quickly filter the big list? Noting that some language clients are implemented in slow languages, that may not be possible.
Or are there some flags that I need to set for the language server?
I am starting the Dart Analysis Server just via:

```
dart $DART_SDK/bin/snapshots/analysis_server.dart.snapshot --lsp
```
Any information would be appreciated.
Best regards.
Flutter has organized their packages so that developers generally only need to import a single library (such as `package:flutter/material.dart`), but that results in a single very large namespace. I'm not sure exactly how many names are included, but I could sort of believe the 5,630 suggestion number (because the server is probably suggesting not just every class but also every constructor in every class).
But I don't know why we'd be getting 26,789 suggestions, and that's definitely not something I would expect.
There aren't any other flags you need to be passing in. As far as I can see you're doing everything correctly.
There is, however, a flag that might provide some useful debugging information. If you could start the server with the flag `--instrumentation-log-file=path` (where 'path' is a path to a temporary file), then it should write all of the communications between the client and the server to that file. If you could send us that file, it would allow us to see what suggestions it's sending back so that we can try to understand why it's sending so many.
@bwilkerson,
Instrumented log files attached.
These were not run on the same machine; I don't know how to switch between Flutter versions.
- Flutter v1.12.13+hotfix.8, run on a Catalina MacBook; instrumentation log size 1.8MB.
- Flutter v1.17.3, run on a Linux Mint 19.3 desktop; instrumentation log size 6.9MB.
@bwilkerson - Looking at just the `"label"` fields, a few things stand out to me:

```
{"label":"DecorationPosition","kind":13,"sortText":"999993","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":404,"displayUri":"package:flutter/material.dart","rOffset":217,"rLength":3}},
// ----8<-----
{"label":"DecorationPosition.background","kind":13,"sortText":"999993","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":404,"displayUri":"package:flutter/material.dart","rOffset":217,"rLength":3}},
{"label":"DecorationPosition.foreground","kind":13,"sortText":"999993","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":404,"displayUri":"package:flutter/material.dart","rOffset":217,"rLength":3}},
```
Why are we sending items like `DecorationPosition.background`? I would expect to send only `DecorationPosition`, and then after the user types `.`, send a list including `background` and `foreground`. I don't think most users expect to autocomplete past a trigger character like that. This shows up in both logs, and I'm not sure how strongly it contributes to the problem.
```
{"label":"xxxx","detail":"int","sortText":"999997","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":58,"displayUri":"dart:typed_data","rOffset":217,"rLength":3}}
// ----8<-----
{"label":"xxxx","detail":"int","sortText":"999997","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":58,"displayUri":"dart:typed_data","rOffset":217,"rLength":3}}
```
This is an API that doesn't show up in the log from Flutter 1.12. It shows up twice in the same response in Flutter 1.17. I see at least 14,801 repeated labels in the 1.17 log, and only 127 in the 1.12 log (some repeated labels may be valid if they are imported from different places?).

(I was surprised to find it's a real name in `dart:typed_data`.)

Some APIs, such as `momentumRetainStationaryDurationThreshold`, even show up 3 times.
```
{"label":"WasmFunction","kind":7,"sortText":"999997","data":{"file":"/home/dennis/projects/flutter/test_app/lib/main.dart","offset":220,"libId":715,"displayUri":"dart:wasm","rOffset":217,"rLength":3}}
```

As far as I'm aware this is something internal to the SDK and should not be importable by user code.
27,000 minus 15,000 possible duplicates still leaves 12,000 completion items, which is still an awful lot. Heck, 5,600 in Flutter 1.12 was enough for me to open a support request about auto-completion stutter.
Ideally, for Nate's pure VimScript LSP client LSC, completions should be 1,500 or under, with way under being better.
Actually I would want the term already entered by the user, 3 characters in LSC's case before triggering auto-completion, to be sent to the Analysis Server for server-side filtering. I don't know the Language Server Protocol enough to know if that is even possible (maybe it isn't). If it is, it would greatly ease the burden on this LSP client which is implemented in a very slow language.
Golang supports custom options for completions. https://github.com/golang/tools/blob/master/gopls/doc/settings.md.
These flags may help:
Since there's been no response to LSP or VS Code issues about filtering by prefix, I'm planning to assume the behaviour of VS Code is the expected behaviour of all clients (and that if the spec is clarified, it would be to match that). This means we can filter by prefix, which in most cases (except where you invoke completion with no prefix) should remove a significant number of completions (discussion in https://github.com/dart-lang/sdk/issues/42152).
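As a rough illustration of why prefix filtering shrinks the payload, here is a toy filter over a handful of invented labels (a naive case-insensitive prefix check; the real server applies its fuzzy matcher rather than a strict prefix test):

```typescript
// Toy illustration of server-side prefix filtering (invented labels; the
// real server uses a fuzzy matcher rather than a strict prefix check).

const allSuggestions = [
  "SizedBox", "SingleChildScrollView", "SizedOverflowBox",
  "Align", "Column", "Container", "ListView", "Padding", "Row", "Text",
];

function filterByPrefix(labels: string[], prefix: string): string[] {
  const p = prefix.toLowerCase();
  return labels.filter((label) => label.toLowerCase().startsWith(p));
}

// Typing just "Si" already discards everything that cannot match.
const filtered = filterByPrefix(allSuggestions, "Si");
console.log(filtered); // ["SizedBox", "SingleChildScrollView", "SizedOverflowBox"]
console.log(`${allSuggestions.length} -> ${filtered.length} items`);
```

Scaled up from 10 labels to the tens of thousands seen in the logs, the same idea is what removes "a significant number of completions" whenever a prefix is present.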
**Unnecessarily long completions**

> Why are we sending items like DecorationPosition.background?
I think taking away enum values would hurt usability. If you have code like:

```dart
WidgetFoo(decorationPosition: ^
```

it feels natural to want to type the name of the enum value (e.g. `background`) rather than again typing `DecorationPosition`. Having the values all show up here allows you to just type "background" and have `DecorationPosition.background` show up.
If the number of these is huge, perhaps it could be optional, though I think with the change mentioned above it might be unnecessary.
**Repeated completions (way worse in 1.17)**
I'll see if I can repro this - if they're exact dupes, that's definitely a bug.
Ok, I think I tracked down these dupes:
```
{"label":"xxxx"
```

There are actually multiple `xxxx`s defined, but they're static members of classes and should not have been included unless you'd typed the class prefix. This is fixed by 994c0e8dad6df290b1919b383171f4c5bf628cc4 and I can't repro on master.
> Some APIs such as momentumRetainStationaryDurationThreshold even show up 3 times.
This will also disappear with the fix above (it's a static member that shouldn't have been included); however, the reason it appeared multiple times is different - the class that contains it is re-exported in three different libraries. If you haven't already imported any of them, then it's valid for it to show up 3 times, as you're able to select which library you want to import it from (once you've already imported a library that has it, it will no longer show up from the others).
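That rule could be modelled like this (types and library URIs invented for illustration): suggestions are true duplicates only when both the label and the source library match, while the same label offered by several not-yet-imported libraries is legitimately repeated, once per possible auto-import.

```typescript
// Hypothetical model of the duplicate rule described above. A (label,
// libraryUri) pair identifies a suggestion; repeats of the same pair are
// bugs, while the same label from different libraries is a valid choice
// of which library to auto-import.

interface Suggestion {
  label: string;
  libraryUri: string;
}

function dedupe(suggestions: Suggestion[]): Suggestion[] {
  const seen = new Set<string>();
  return suggestions.filter((s) => {
    const key = `${s.label}\u0000${s.libraryUri}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

// Invented example URIs: the same member re-exported by two libraries,
// plus one exact duplicate.
const raw: Suggestion[] = [
  { label: "momentumRetainStationaryDurationThreshold", libraryUri: "package:example/a.dart" },
  { label: "momentumRetainStationaryDurationThreshold", libraryUri: "package:example/b.dart" },
  { label: "momentumRetainStationaryDurationThreshold", libraryUri: "package:example/b.dart" },
];
console.log(dedupe(raw).length); // 2 - only the exact duplicate is dropped
```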
I'm working on a change to filter by prefix server-side (https://dart-review.googlesource.com/c/sdk/+/150628), and in my testing it seems to make a very significant difference. I'll ping when it's finished/merged.
@bluz71 Thanks for sending the logs! @natebosch Thanks for analyzing the logs!
**Unnecessarily long completions**

> Why are we sending items like DecorationPosition.background?

> I think taking away enum values would hurt usability.
The prevailing winds (including several teams outside the Dart team) seem to be to provide increasing numbers of multi-token completions under the assumption that it will improve the usability of the system. I think this would be a good area for some UX research (@InMatrix).
> If the number of these is huge, perhaps it could be optional, though I think with the change mentioned above it might be unnecessary.
I agree that we should wait to see whether such an option is necessary. If it is, then I'd suggest that it be based on "single token" vs "multi token" completion styles, as that appears to me to be the real factor behind the option.
**Repeated completions**
@jwren and I are also working on the suggestion generation in server and tackling some issues like this there.
> ... however the reason it appeared multiple times is different - the class that contains it is re-exported in three different libraries.
There's an outstanding request for us to be able to intelligently select the right library from which a name should be imported, especially within the Flutter SDK. There's been some discussion, but no work started yet. That arc of work should help with this issue.
But aside from cases where users might legitimately want to import an element from one of several possible libraries, there shouldn't be any duplicates.
**APIs that should be private to the SDK**
As far as I'm aware, the analyzer still looks in `lib/_internal/libraries.dart` to discover the libraries in the SDK, and that file contains the following:

```dart
"wasm": const LibraryInfo("wasm/wasm.dart",
    categories: "Server", maturity: Maturity.EXPERIMENTAL),
```
My understanding is that internal libraries are supposed to be marked as `documented: false`, but maybe the analyzer should also be looking at the maturity level. Do you know who we can contact to clarify this?
> My understanding is that internal libraries are supposed to be marked as documented: false, but maybe analyzer should also be looking at the maturity level. Do you know who we can contact to clarify this?
I'm not sure - @vsmenon might know
Sorry, not completely caught up on the context here, but I don't think `dart:wasm` is active at this point. You could add `documented: false` and send it to the owner for review.
> I'm working on a change to filter by prefix server-side (https://dart-review.googlesource.com/c/sdk/+/150628), and in my testing it seems to make a very significant difference. I'll ping when it's finished/merged.
With pure VimScript LSP clients such as vim-lsc and vim-lsp the performance benefit will be tremendous.
So I am very pleased that it is a work in progress. Thank you and the team.
Just out of curiosity, what will be the exchange required between client and server for this to function correctly?
- Client sends prefixed request ---> what type of LSP event is expected, and in what format?
- Server sends back filtered list ---> I assume no change from the existing format for completions?

It seems to me that certain LSP clients may, likely will, need to be changed to support this, so clarification will be appreciated.
Lastly before I forget, can you list the characteristics of this prefix matching; strict prefix, substring, fuzzy, case sensitive? For me, strict prefix case sensitive is ideal; but I am sure others will differ on this.
Best regards.
> Just out of curiosity, what will be the exchange required between client and server for this to function correctly?
There should be no change required from any clients - the server knows the state of the current document (because of `textDocument/didChange`) and can use the offset to look at the identifier directly behind the supplied offset.
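A simplified sketch of that lookup (the real logic lives in the analysis server and follows Dart's actual identifier grammar; this version just walks back over a basic identifier character class):

```typescript
// Simplified sketch of recovering the typed prefix from the server's own
// copy of the document (kept in sync via textDocument/didChange). The
// client only sends an offset; the server derives the prefix itself.

function identifierBehind(text: string, offset: number): string {
  let start = offset;
  // Walk backwards while the previous character looks like part of an
  // identifier (a simplification of Dart's real identifier rules).
  while (start > 0 && /[A-Za-z0-9_$]/.test(text[start - 1])) {
    start--;
  }
  return text.slice(start, offset);
}

const doc = "children: <Widget>[ Si";
console.log(identifierBehind(doc, doc.length)); // "Si"
console.log(identifierBehind(doc, 8));          // "children"
```

This is why no client changes are needed: the prefix never travels over the wire, only the position does.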
> Lastly before I forget, can you list the characteristics of this prefix matching; strict prefix, substring, fuzzy, case sensitive? For me, strict prefix case sensitive is ideal; but I am sure others will differ on this.
I believe it's fuzzy and case-insensitive (the fuzzy matcher already existed so I'm just reusing it for LSP). You can see some of the tests at https://github.com/dart-lang/sdk/blob/f7f0e2b51bbc50f6c8e502c8bde44ee3806912e9/pkg/analysis_server/test/src/services/completion/filtering/fuzzy_matcher_test.dart#L277.
My understanding is that clients can/will still filter client-side (as they do today - since they don't know if the server applied prefix filtering), so if your editor has additional settings for client-side filtering, they should still work (for example, VS Code has some options to enable/disable allowing simple typos to match - if you turn that off, then it'll do exact filtering even if the server supplied options that matched based on typos).
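For intuition, here is a minimal case-insensitive subsequence matcher; it is far simpler than the server's real fuzzy matcher (linked above), which is considerably more sophisticated:

```typescript
// Minimal case-insensitive subsequence ("fuzzy") matcher - a toy stand-in
// for the server's real matcher. A pattern matches when its characters
// appear in the candidate in order, not necessarily adjacently.

function fuzzyMatches(pattern: string, candidate: string): boolean {
  const p = pattern.toLowerCase();
  const c = candidate.toLowerCase();
  let i = 0;
  for (const ch of c) {
    if (i < p.length && ch === p[i]) {
      i++;
    }
  }
  return i === p.length;
}

console.log(fuzzyMatches("sdbx", "SizedBox")); // true - S..d..B..x in order
console.log(fuzzyMatches("SB", "SizedBox"));   // true - case-insensitive
console.log(fuzzyMatches("sx", "ListView"));   // false - no 'x' after the 's'
```

Because fuzzy matching is deliberately permissive, a client that wants strict prefix case-sensitive behaviour can still narrow the server's results further on its own side.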
Great news, so no change needed client side.
The LSC plugin (Vim LSP client) already does client-side filtering, it bubbles up strict prefix case sensitive matches to the top of the completion menu.
These upcoming Analysis Server enhancements and fixes will be significant. Looking forward to their availability.
Some interesting results from LSC 303 comparing various language servers
Two tests were undertaken, counting the number of server-provided completion items when completing a language keyword and also when completing a library type after typing a 3 character prefix.
| Language | Language Server | Keyword Matches | Type Matches |
|---|---|---|---|
| C++ | clangd | 100 | 100 |
| Dart | Analysis Server / Flutter 1.12 | 5,630 | 5,619 |
| Dart | Analysis Server / Flutter 1.17 | 26,789 | 26,777 |
| Go | gopls | 4 | 4 |
| JavaScript | Microsoft TypeScript LS | 1,449 | 1,203 |
| JavaScript | Sourcegraph JavaScript LS | 1,083 | 1,083 |
| Ruby | Solargraph | 4 | 3 |
| Rust | rust-analyzer | 206 | 206 |
| Swift | SourceKit-LSP | 33,216 | 33,216 |
| TypeScript | Microsoft TypeScript LS | 1,371 | 1,106 |
Less is more :wink:
@bluz71 a few of the improvements have made it through to the Flutter master channel and it seems to have improved things for VS Code Vim users - can you see if it's made much difference for you? Thanks!
@DanTup,
Thanks for the prompt.
I typically use the Dart that ships with Flutter; but in this case I temporarily downloaded the latest Dart SDK dev release, `2.9.0-19.0.dev` (ref a38037f). Hopefully that contains all the changes referenced in this issue.
Tested on two machines:
Raw match count, same test as my last post:
| Language | Language Server | Keyword Matches | Type Matches |
|---|---|---|---|
| Dart | Analysis Server / Flutter 1.12 | 5,630 | 5,619 |
| Dart | Analysis Server / Flutter 1.17 | 26,789 | 26,777 |
| Dart | Analysis Server / Dev 2.9.0-19.0 | 863 | 572 |
Night and day performance difference.
The interactive performance via the Vim LSC plugin is now smooth as silk. No stall, no stutter, unlike the previous Flutter releases; and that even includes when run on the very slow MacBook. It feels as fast as Microsoft's TypeScript language server and the gopls Go language server.
No doubt about it, a tremendous result.
I would say this is now a solved issue.
Side note, all language servers should try and bound their match counts (say under 1,000), it makes a huge difference client-side in the editor.
Many thanks to the Dart team, can't wait for it to roll into an upcoming Flutter release.
I checked the `2.9.0-19.0` Dart SDK with Emacs (lsp-mode) and it works great! It works very smoothly and increased my fun of writing Dart by 10 times. Many thanks to the Dart team and @DanTup.
@bluz71 @itome thanks for testing - I'm glad to hear the changes made a reasonable difference!
If you spot perf issues anywhere else, please do raise them here (and cc me) - with logs where possible.
(@itome I don't have permissions here to close this, but if you're happy with the result I think you can do it. Thanks!)
> made a reasonable difference
Underselling the benefit, these changes make a huge difference. You and the team should be proud, great work!
Thank you very much @DanTup, I'll link this fix on some lsp-dart issues
By the way - I added a note in the docs about passing `--client-id` and `--client-version`:

If you have plugins that spawn the server, it would be useful to pass these so there's some visibility of where the LSP server is being used.
I'll change `lsp-dart` (the Emacs client) to use it, thank you!
Added the `--client-id` and `--client-version` flags to `lsp-dart` here, in the `1.12.6` release.
@ericdallo thanks!
@DanTup Hello, I'm using the Dart LSP server with Emacs to write Flutter code. The Dart analysis server returns such a large JSON response for `textDocument/completion` that Emacs freezes for ~10 seconds while parsing the JSON. Do you have any ideas for reducing it? I checked the JSON response and noticed that `documentation` is included in the response.

```dart
Align(
↑ send completion request here.
)
```