BurntSushi opened this issue 5 years ago
I'm no longer using ALE, but I tested your scenario in VS Code and LanguageClient-neovim.
The Code extension runs cargo check, so I left it a couple of seconds to finish after opening the project. I then opened prog.rs and quickly Alt-clicked on InstPtr at line 24. rust-analyzer is lazy and only processes the code when needed, so the first request is slower (~7s). The second go to definition request on the same symbol completes instantly, and so does one on Inst::Match at line 121. If I restart Code, open the file and wait for it to finish the analysis, even the first request completes instantly.
With LanguageClient-neovim, the first request takes about 8 seconds to complete. Following go to definition requests in the same file are either instant or take 2-3 seconds to finish. Waiting after opening the file doesn't help. There is no need to save the file for it to be processed.
If I do save the file, it gets processed in full. Go to definition has the same delay of around 8 seconds. If I save the file a second time with no changes, go to definition works instantly; same after inserting an empty line.
I think the Code extension requests some eager analysis for the file (maybe because of the highlighting) that the VIM extension does not. Besides that they behave the same: the first request can be slow, and later ones finish instantly if the file was fully processed.
Maybe the behaviour you're seeing is caused by ALE?
Right, the first request is slow, but subsequent requests after that are fast. That's what I meant by the latency of opening a file using jump-to-definition.
I don't have enough knowledge to know whether this is being caused by ALE or not, but RLS has very little latency even when used with ALE.
So you're only seeing this issue on the first go to definition request in each file, until closing the editor? I think that's normal. RLS tends to run eagerly, while rust-analyzer is lazy. I see two ways to improve on this:
- start processing a file as soon as it's opened (on the DidOpenTextDocument notification)
- pre-process all the files on startup (like ra_cli analysis-stats does), with a lower priority
IDEs like Eclipse or IntelliJ IDEA tend to perform indexing of the entire project on open, mostly in the background. After indexing is finished, everything works instantly.
I think this is intentional, to make it work better on very large projects (think of the compiler). Saving the analysis results to a file would be great, bringing the best of both worlds. We could load them, then check for modified files in the background. But I'm not offering to implement this.
So you're only seeing this issue on the first go to definition request in each file, until closing the editor?
Hmm, I don't think it's a per-file thing. Once the first goto definition completes, subsequent goto definition requests complete almost instantly, as long as they are within the same crate. (If I move to a different crate in the same Cargo workspace, then I see the long latency again. And again, after the first request completes, subsequent goto definition requests for that crate complete almost instantly.)
I think that's normal. RLS tends to run eagerly, while rust-analyzer is lazy.
To be clear, I think this is a bug regardless of whether it's expected behavior or not. I don't know whether RLS is saving an index to disk or not, but even though RLS generally takes longer to settle down after starting for a project, it still services goto definition requests nearly instantly.
Hmm, I don't think it's a per-file thing. Once the first goto definition completes, subsequent goto definition requests complete almost instantly, as long as they are within the same crate. (If I move to a different crate in the same Cargo workspace, then I see the long latency again. And again, after the first request completes, subsequent goto definition requests for that crate complete almost instantly.)
This is because ALE was configured to set the project root to the nearest Cargo.toml, which is not correct for a workspace setup. I was having the same problem with LanguageClient-neovim; changing the root finder to look for Cargo.lock resolved the problem for me. With that config, only the first request will be slow for the whole project, which I think is acceptable.
Anyway, storing the index to disk should be a feature of rust-analyzer. What do you think, @matklad?
Thanks for the report!
I have a hypothesis for why RLS is fast and rust-analyzer is slow in this case.
RLS has a fast-path for when save-analysis is not ready. Specifically, it uses racer to serve goto definition in this case, which gives fast, but approximate results.
On the other hand, rust-analyzer always uses precise (but, of course, quite incomplete at the moment) analysis, and it needs some time for initial analysis.
At this moment, I fear that a delay of several seconds for the first action after opening a project is a deliberate trade-off. That is, the imagined workflow is that you open a project in your editor of choice, spend about 10 seconds without smart IDE features, but, after that, everything is instant until you stop working on the project. From your issue description it seems that you hit this "initial loading" path quite a bit more often than I expect. @unrealhoang's explanation seems plausible: if the client creates a fresh server instance for each of the n packages in the workspace, you'll see the slowdown n times (and the memory usage will be n times larger as well). Another explanation could be that, when you work in vim, you close and open the editor for different files, and that doesn't allow vim to persist the analyzer process between sessions. These two problems could be addressed, I think, but I'll need confirmation that they are indeed the real culprit here.
Long term, I have a couple of ideas for how to make initial processing faster.
First, at the moment rust-analyzer deliberately does not persist any analysis results to disk, and does a from-scratch analysis on start-up. This is done in order to avoid the complexity of I/O and state reconciliation. It also pushes us to make the initial analysis acceptably fast :) Long term, we should implement persistence, either by adding on-disk storage to salsa, or by adding .rmeta files as an alternative to from-source analysis.
Second, the core of the problem here is that Rust's name-resolution rules are hard. Naively, one would expect that, in prog.rs's case, all the IDE needs to do is parse this single file and figure out that the definition is a couple of lines above. However, in the worst case, because of macros and glob imports, name resolution works at crate granularity and requires a fixed-point iteration algorithm. Currently, rust-analyzer implements only the worst-case algorithm, so, during that 5-7 second delay, rust-analyzer processes each module of regex, core, std, liballoc, etc. It seems like it should be possible to implement some kind of fast path (if the module doesn't have glob imports or macros, don't process the whole crate), but it's unclear how to do that correctly and without duplicating the logic.
Third, we can do something like RLS, and implement explicit dumb mode, which works until the analysis is done, but might give you incorrect results.
So, yeah, the TL;DR is that it's won't-fix in the short term, but I am curious about the specifics of Vim here, because it seems like it hits from-scratch analysis more often than it should.
Also, one short-term fix we can add here is to kick off analysis as soon as the user opens a file, as opposed to waiting until they actually invoke goto definition.
@matklad Thanks for the explanation! I appreciate it.
but I am curious about the specifics of Vim here, because it seems like it hits from-scratch analysis more often than it should.
Everyone's workflow is likely to be different, but I do generally keep one vim instance open for each repository I work on. That vim instance typically stays open for quite some time. So paying an initial cost there and then having effectively instant results isn't too much of a burden.
The problem is that I spend a lot of time reading code. Checking out a repository, opening some files and reading and understanding code in that repository is a fairly common thing for me. Each time I clone a repository, I'll open some files in vim. When I open those files, I do it because I want to try to read and understand some portion of code. I'll inevitably, at some point, utilize goto definition to find the definition of some type, but become frustrated when it doesn't work.
So I guess teasing this apart, there are two issues here:
To be clear, as I said before, the extent to which these are problems with the client vs the server is not clear to me, since I'm not familiar with LSP internals. I'm just trying to describe the problem I'm seeing as an end user. :-)
Also, one short-term fix we can add here is to kick off analysis as soon as the user opens a file, as opposed to waiting until they actually invoke goto definition.
This would probably help in some non-trivial number of cases, yes.
The concerns about I/O synchronization are definitely appreciated. That's a huge pain to get right. And having a second fast path is also annoying. However, in my experience, it's important that these little UX bugaboos work well. I don't know whether my workflow is representative, but I wouldn't be surprised if it was, at least for vim users.
Yeah, the second one definitely seems like something that shouldn't be happening. Could you share a minimal vim config with the plugin you are using? I'd love to dig into this, but it's hard for me to reproduce it myself, as I am not a vim user :)
I am also curious whether @unrealhoang's suggestion helps with this problem:
If I move to a different crate in the same Cargo workspace, then I see the long latency again
If that's the case, we might want to adjust our docs for vim setup.
The issue of the first failing request doesn't happen in Code or LanguageClient-neovim, so my guess is that it's a client problem.
@lnicola even if it's a client problem, I am still interested in debugging and fixing it :-)
I'll try to work on getting a confirmed minimal vim config for you when I get home today. However, I can just post the relevant portions of my vim config now:
call plug#begin('~/.vim/plugged')
Plug 'w0rp/ale'
let g:ale_linters = {'rust': ['cargo', 'rls']}
let g:ale_rust_rls_executable = 'ra_lsp_server'
let g:ale_rust_rls_toolchain = 'stable'
let g:ale_rust_rls_config = {
\ 'rust': { 'clippy_preference': 'off' }
\ }
let g:ale_lint_on_enter = 0
let g:ale_lint_on_filetype_changed = 1
let g:ale_lint_on_save = 1
let g:ale_lint_on_text_changed = 'never'
let g:ale_lint_on_insert_leave = 0
let g:ale_completion_enabled = 0
call plug#end()
The relevant goto definition command is :ALEGoToDefinition.
Note that the above config assumes you have vim-plug installed. Once the above is in your vim config, run :PlugInstall. (You can keep a clean vim install afterwards by removing the ALE config above and then running :PlugClean, which should delete it.)
Also note that I am not particularly attached to any particular LSP client. I tried several of them (including LanguageClient) but settled on ALE for reasons that I can't remember. When I get home, I'll try out some of the other vim LSP clients and see if they have the same problem.
@BurntSushi from your config, I can see that ALE does not start the language server when you open a file, because of let g:ale_lint_on_enter = 0. Also, I just found this in ALE's source code:
https://github.com/dense-analysis/ale/blob/135de34d22/ale_linters/rust/rls.vim
It's indeed using Cargo.toml for the project root, which does not work as intended for Cargo workspace projects. Unfortunately, that is not configurable in ALE. You can either patch it (to Cargo.lock) or use a different language client.
Here's a minimal config for LanguageClient to support go to definition:
call plug#begin('~/.vim/plugged')
Plug 'autozimu/LanguageClient-neovim', {
\ 'branch': 'next',
\ 'do': 'bash install.sh',
\ }
call plug#end()
let g:LanguageClient_serverCommands = {
\ 'rust': ['rustup', 'run', 'stable', 'ra_lsp_server'],
\ }
let g:LanguageClient_rootMarkers = {
\ 'rust': ['Cargo.lock'],
\ }
" You can map differently, of course.
nnoremap <silent> gd :call LanguageClient_textDocument_definition()<CR>
You can control the Language Server start up yourself by:
let g:LanguageClient_autoStart = 0
And start LanguageServer manually by calling
:LanguageClientStart
Ah interesting, thanks for catching my config error. I must have disabled that at some point, probably because of RLS.
I'll take a look at LanguageClient too. Thanks!
I'm using vscode and goto definition is always slow (it takes time on start to enable this, but even after it's ready it takes ~5 seconds each time to follow a definition). Is that expected, or do I have something configured wrong?
@kanekv the first requests will be slower because rust-analyzer is lazy and doesn't parse and analyze the whole project on startup. But the next ones should be quite fast (instant on a small project) if they touch the same files.
In Code there's a "Rust Analyzer: Status" command that shows how long the last LSP requests took, and there's also profiling support if you think you've found a bug.
I've tried to follow the same method twice, one immediately after the other:
753 textDocument/codeAction 3651ms
751 rust-analyzer/inlayHints 4781ms
756 rust-analyzer/inlayHints 1933ms
760 textDocument/hover 40ms
764 textDocument/hover 0ms
* 745 textDocument/codeLens 0ms
744 textDocument/codeAction 0ms
748 textDocument/codeAction 0ms
749 textDocument/codeAction 0ms
739 rust-analyzer/inlayHints 2570ms
And 2nd time:
832 textDocument/definition 2604ms
* 813 textDocument/codeAction 0ms
818 textDocument/codeAction 0ms
820 textDocument/definition 0ms
823 textDocument/foldingRange 0ms
824 textDocument/codeLens 1ms
826 textDocument/codeLens 0ms
829 textDocument/codeAction 3590ms
828 textDocument/documentHighlight 3832ms
822 rust-analyzer/inlayHints 4807ms
I'll try to profile if this is not expected behavior.
P.S. I've noticed that it seems to only cache the last definition: if I don't move the mouse over other methods, it works instantly. If I hover over one method, then another, and then come back to the first one, it shows Loading... again and takes ~5 seconds.
That's certainly not what I'm seeing while editing rust-analyzer itself. Most definition and resolve requests are 0ms for me, with hints and decorations taking ~200ms. I even tried hitting F12 instead of using the mouse, to prevent the symbol from getting looked up on hover.
924 textDocument/codeAction 1ms
926 rust-analyzer/inlayHints 1ms
925 rust-analyzer/decorationsRequest 10ms
927 textDocument/foldingRange 3ms
928 textDocument/codeLens 4ms
929 textDocument/codeLens 4ms
930 textDocument/codeAction 1ms
931 codeLens/resolve 2ms
* 921 textDocument/codeAction 3ms
922 textDocument/definition 0ms
It does get slower when cargo check is running, but I assume you would have noticed that. It's also noticeably slower on large projects like rust-lang/rust. FWIW, I'm on a middle/high-end Linux laptop, with an SSD.
@lnicola I do have cargo watch enabled (it seems on macOS it's a must, otherwise the LSP process eats 100% of the CPU). I have a top last-year model with an i9 CPU and 32 GB of RAM.
I do have cargo watch enabled (it seems on macOS it's a must, otherwise the LSP process eats 100% of the CPU).
Ugh, try enabling rust-analyzer.useClientWatching if you don't already have it enabled. There are some issues with file watching on macOS, but I don't think the cargo watch integration has any effect here. Or maybe you're confusing that with rust-analyzer.enableCargoWatchOnStartup. The latter (which is the one I was referring to) just runs cargo check on startup and when you save a file. It does slow down RA while cargo check is running.
I have both enabled. It seems my numbers are way off; any ideas on how I can help fix this?
A first step here might be to create a separate issue, because it looks like in this case we are observing a specific bug, rather than the general slowness due to how initialization currently works architecturally.
If this issue happens only in a specific project, it would be great to get a link to this project.
Some basic steps I would try:
- wait for cargo check to finish, if it's the first time you're opening the project

@matklad @lnicola Looks like I've got more interesting info: when I go to definition using the hotkey (F12), it works pretty fast (less than a second). When I Cmd+click, it takes a long time and shows a Loading... tooltip before enabling the hyperlink and the ability to click.
A Cmd+click will first activate hover before goto definition
Yeah, is hover supposed to be slow? Should I open another issue about it?
Please do.
Ctrl-click activating a hover first: is that desired behavior? Or is it that the hover is sent in flight before someone has time to click? I must admit I find the hover kicks in a little too soon at the moment, and things start appearing when I'm not really intending a hover to happen. It feels a little 'jumpy'. If the hover was a tad slower to kick off, you might get your Ctrl-clicks in beforehand :-)
@gilescope it is up to the language server client (the common case appears to be vscode); there is probably a setting somewhere to change the timing.
Bingo: "editor.hover.delay": 300
Nice. Sorry it’s all so seamless it’s hard to tell where one bit stops and the next bit starts!
I've been trying rust-analyzer off and on to see if this issue has improved. It was bad enough that I went back to RLS. I'd actually rather deal with slowish RLS build times in exchange for lower-latency goto-definition. But it looks like rust-analyzer is doing a lot better here now as of current master (bd4ea87f7442541123e3bbd7e17bfecdfb3c18c6). In particular, I fixed my config error with g:ale_lint_on_enter.
I'm going to give rust-analyzer another shot. I do think this issue should remain open, though, because it would be great to be able to open a file and have goto-definition work almost instantly.
It also seems that, even though I have ale_lint_on_enter enabled, my first goto-definition request always has high latency, even if the file has been open for a few minutes. Is there any way to force RA to build its goto-definition index once it starts? That would go a long way toward making the latency problem better.
(It seems like there should, in theory, be a way for me to hack around this on my end, e.g., by sending a phantom goto-definition request whenever my language client plugin starts rust-analyzer, but it would take me quite a bit of time to figure out how to do that.)
Is there any way to force RA to build its goto-definition index once it starts?
We should just fix that on our side. More generally, I believe there's some low-hanging fruit around optimizing the latency for this use-case: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0/topic/slow.20first.20query/near/188332605. Someone (probably me) should just get to this one day...
Not really, as there is no index per se. It happens lazily: you ask for a definition and we look for a use statement that imports it. Then we load that file, look for the item, and repeat, all while caching everything. So if the second request is fast, that's because it hits the cache.
In Code this is handled nicely because we colorize the file and show some inferred types by default, but other clients don't do that. I guess we could send a request for the inlay type hints (using ale#lsp_linter#SendRequest), but that's client-specific.
@lnicola Thanks for the reply! I appreciate it. And sure, yeah, I'm speaking as an end user here. "Force RA to populate the cache" up front would be a fine thing to do. I didn't mean to get too caught up in the underlying implementation. :-)
In Code this is handled nicely because we colorize the file and show some inferred types by default, but other clients don't do that. I guess we could send a request for the inlay type hints (using ale#lsp_linter#SendRequest), but that's client-specific.
Sorry, I'm not following here. What do colorizing and inferred types have to do with goto-definition latency?
Or are you saying that, in Code, there is already an interaction in place that happens when you open a file that also happens to prime the cache used by goto-definition? And so in that environment, the issue doesn't manifest as it does for me.
Or are you saying that, in Code, there is already an interaction in place that happens when you open a file that also happens to prime the cache used by goto-definition? And so in that environment, the issue doesn't manifest as it does for me.
Exactly. Triggering those features forces RA to run name resolution for the entire file. And since name resolution is what go to definition needs, the cache is primed automatically.
Is it possible to generate a static database once for a whole Cargo project? It would be similar to what cscope does for C projects. It would be best if rust-analyzer could provide a vim cscope interface.
@piping This is basically what #3098 aims to do
I think #3474 should have helped with this.
Can we use the tags file in a vim environment? It might be helpful, since the tags file can persist on disk.
Is there any way to force RA to build its goto-definition index once it starts?
We should just fix that on our side. More generally, I believe there's some low-hanging fruit around optimizing the latency for this use-case: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0/topic/slow.20first.20query/near/188332605. Someone (probably me) should just get to this one day...
Is this work being tracked anywhere?
Is there any way to force RA to build its goto-definition index once it starts?
This is now done. You should see an "indexing" progress indicator on startup.
First, at the moment rust-analyzer deliberately does not persist any analysis results to disk, and does a from-scratch analysis on start-up. This is done in order to avoid the complexity of I/O and state reconciliation. It also pushes us to make the initial analysis acceptably fast :) Long term, we should implement persistence, either by adding on-disk storage to salsa, or by adding .rmeta files as an alternative to from-source analysis.
Is this on the roadmap for the near/mid future? I run into this a lot with rustdoc; it takes multiple minutes to start up from scratch, and because I switch branches a lot, I spend more time without being able to use ~any IDE features than with them.
There is still a lot of lower-hanging fruit to pick, for example:
I am planning to look into at least the second point in the coming weeks.
I see, that's unfortunate :/ I'm sure all of those improvements are useful and make a big difference on small codebases, but that means RA will still take multiple minutes on rustdoc.
It turns out the issue was that I was using sshfs for developing remotely. https://code.visualstudio.com/docs/remote/ssh and https://code.visualstudio.com/docs/remote/ssh#_managing-extensions are about 100x as fast, and this is no longer a concern for me :)
It's been a while; what's the current status of this issue?
So I was finally able to get rust-analyzer working in vim (with ALE), and it appears that fixing #1474 did the trick. So thank you! Working in crates like regex, I can definitely notice the speed improvements. RLS can take quite some time to catch up to changes in the source code, but rust-analyzer is almost instant.
I did, however, find a place where RLS appears to be much better: the latency at which go-to definition works. Here's the case I'm fiddling with right now:
1. Check out the rust-lang/regex repo.
2. Open src/prog.rs, go to line 24, and put the cursor over InstPtr.
3. Save the file.
4. Run :ALEGoToDefinition as quickly as one can after saving. (Presumably, this problem isn't specific to ALE, so I guess replace this step with whatever command lets you jump to a declaration in your environment.) This should move the cursor just a few lines up to where the InstPtr type is defined.

When I do this for RLS, (4) succeeds pretty much instantly, even though RLS is still pegging my CPU. Presumably, RLS builds whatever index structure it needs for goto definition first, and is then able to use it even though it's still doing other work. (This is a guess based on observed behavior. I'm not familiar with RLS internals.)
However, when I do this for rust-analyzer, it takes about 5-7 seconds for the goto definition to actually move my cursor to the declaration site. Ideally, this should be as fast as RLS.
The overall goal I'm requesting here, I think, is to minimize the latency at which goto definition works after opening a file. This is a fairly common workflow for me personally, especially in code projects that I'm unfamiliar with, which lets me jump around to definition sites as quickly as I want.
Note that an alternative sequence of steps is to simply run :ALEGoToDefinition twice. The first time causes the language server to start, and the second time actually allows the language server to respond to the request. Now, with RLS, it seems like I can run goto definition twice as quickly as I want, and it will always succeed on the second request. But with rust-analyzer, I have to wait a second or two after the first press, otherwise the second request seems to just get ignored. Once the second request is made (again, after waiting for rust-analyzer to do its initialization), it is reliably successful, but only after 5-7 seconds, as above.
Now, ideally, I could open a file, issue goto definition, and have it succeed almost immediately. However, needing to do it twice (or save the file first) is an acceptable workaround for me personally. The much more important thing here, IMO, is minimizing overall latency. Moreover, I also understand that needing to do these key presses twice might not be a problem with the server, but rather the client. So it's less clear whether it's actually a bug in rust-analyzer or not.
Hopefully this is enough info to go on. These bug reports feel like they are super hard to work through. :-) My hope is that the latency question isn't specific to my setup, and it can be reproduced in other environments.