Closed — matklad closed this 4 months ago
I wonder if it would be possible to have some sort of LRU caching for the compressed source, where you compress everything, but things that are frequently changed may stay in memory uncompressed to avoid unnecessary compression/decompression. I guess it also depends on the compression itself and what kind of overhead it has.
Why is the source stored at all? Can't it be read from disk as needed?
@jrmuizel it's important to let no arbitrary IO into the core incremental computation. We can't guarantee that reading a file twice will yield the same result, and, if we get different results in the same incremental session, we'll be in an inconsistent state.
What should be possible is to "copy" files to some ".rust-analyzer" dir and read them from there, with a contract that an IO error while reading from this rust-analyzer-private dir is fatal and requires a restart.
Overall, spending 50 megs of RAM to store text seems like a much better deal than dealing with IO in any form. A good thing about compression is that it gives us memory savings in a purely-functional context.
> I wonder if it would be possible to have some sort of LRU
The simplest form of LRU is "compress everything once in a while". This is what we do for syntax trees, and it seems to work.
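A minimal sketch of that "compress everything once in a while" strategy (the `SourceStore`/`SourceEntry` names are hypothetical, and the codec stubs just copy bytes, standing in for a real algorithm such as LZ4):

```rust
use std::collections::HashMap;

// Hypothetical codec stubs standing in for a real algorithm such as LZ4;
// here they just copy bytes so the sketch stays self-contained.
fn compress(text: &str) -> Vec<u8> { text.as_bytes().to_vec() }
fn decompress(bytes: &[u8]) -> String { String::from_utf8(bytes.to_vec()).unwrap() }

enum SourceEntry {
    // Touched since the last sweep: kept as plain text for cheap access.
    Hot(String),
    // Untouched since the last sweep: stored compressed.
    Cold(Vec<u8>),
}

struct SourceStore {
    files: HashMap<u32, SourceEntry>,
}

impl SourceStore {
    fn get(&mut self, id: u32) -> String {
        let entry = self.files.get_mut(&id).unwrap();
        let text = match entry {
            SourceEntry::Hot(text) => text.clone(),
            SourceEntry::Cold(bytes) => decompress(bytes),
        };
        // Accessing a file keeps it hot until the next sweep.
        *entry = SourceEntry::Hot(text.clone());
        text
    }

    // The "once in a while" part: demote everything to compressed form.
    fn sweep(&mut self) {
        for entry in self.files.values_mut() {
            let bytes = match entry {
                SourceEntry::Hot(text) => compress(text),
                SourceEntry::Cold(_) => continue,
            };
            *entry = SourceEntry::Cold(bytes);
        }
    }
}
```

Compared to per-file LRU bookkeeping, a periodic sweep needs no access counters: anything touched between sweeps pays decompression at most once.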
How does `ra_vfs` relate to the compression? `ra_vfs` monitors the files and has them in memory, right? So should the compression happen already in `ra_vfs`?
Yeah, I think so!
Currently, Vfs stores text as `text: Arc<String>`, and it could be changed to a more abstract type. I can't sketch the whole design off the top of my head; I expect there will be some interesting questions about lifetimes and interior mutability. If we are introducing a new type for this, it might also be a good occasion to switch the LSP layer to patch files with edits on modifications, instead of asking the client for the whole text buffer every time.
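For the "patch files with edits" idea, the core operation is just a range replacement over the buffer. A minimal sketch, assuming non-overlapping edits (the `TextEdit` shape here is hypothetical and uses byte offsets; the LSP protocol itself expresses ranges as line/character positions that would need converting):

```rust
// Hypothetical edit type using byte offsets into the buffer.
struct TextEdit {
    start: usize,
    end: usize,
    new_text: String,
}

// Apply a set of non-overlapping edits to the buffer in place. Edits are
// applied back-to-front so that earlier offsets stay valid while later
// ranges are being replaced.
fn apply_edits(text: &mut String, mut edits: Vec<TextEdit>) {
    edits.sort_by_key(|e| e.start);
    for edit in edits.into_iter().rev() {
        text.replace_range(edit.start..edit.end, &edit.new_text);
    }
}
```

With incremental sync the client only ships deltas like these, so the server never needs to re-receive (or re-allocate) the whole buffer on every keystroke.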
Hi! I've noticed that a lot of non-Rust files (LICENSE, AUTHORS, Dockerfile, COPYING, .gitignore, etc.) are included in the salsa db. Is this by design?
@marcogroppo that's definitely a bug, only `.rs` files should be included
found it:
Here, we include extension-less files. This is so that we don't ignore directories. We should probably do additional filtering somewhere on the IO layer to filter out extension-less files.
I did a quick check, and with the ra_vfs patch the memory occupied by rust-analyzer's source code is now 3586 files (38 MB); without the patch it was 4305 files (47 MB). Another thing I've noticed is that the source code includes tests, benchmarks and examples from libcore and other dependencies.
This seems like an interesting idea but one should note that some operating systems already compress memory pages when under pressure (macOS by default, Linux with zram).
I think once we can properly ignore files that are not necessary, like tests, benchmarks or examples from external sources, the amount of files should be reduced even further.
> I think once we can properly ignore files that are not necessary, like tests, benchmarks or examples from external sources, the amount of files should be reduced even further.
I think we should extend the vfs API to allow specifying exclusions together with the roots. Then, we can change the logic in rust-analyzer to ignore `tests|benches|examples` for crates from crates.io.
Could we use the `ignore` crate for this?
I think we can use `ignore` to at least get `.gitignore` support; maybe we could use it to ignore other things as well?
Yeah, using gitignore is fine!
We only need to think carefully about the interface between VFS and the rest of the world, such that consumers could flexibly choose the strategy. Perhaps VFS should just accept a BoxFn, such that using gitignore is strictly the consumer's business?
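A sketch of that "VFS takes a boxed filter" shape, where the exclusion strategy (gitignore, extension lists, or anything else) is purely the consumer's choice (the `RootConfig` name and fields are hypothetical):

```rust
use std::path::{Path, PathBuf};

// The VFS only asks "is this path included?"; how that is decided
// (gitignore, extension lists, hard-coded dirs) is the consumer's business.
type Filter = Box<dyn Fn(&Path) -> bool>;

struct RootConfig {
    root: PathBuf,
    filter: Filter,
}

impl RootConfig {
    fn contains(&self, path: &Path) -> bool {
        // A path belongs to this root if it is under the root directory
        // and the consumer-supplied filter accepts it.
        path.starts_with(&self.root) && (self.filter)(path)
    }
}
```

A consumer wanting `.gitignore` support would build the closure on top of the `ignore` crate; the VFS itself never learns about gitignore semantics.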
Wouldn't it be good to include the examples, tests and benchmarks, so things like go to definition and find references keep working?
@lnicola for crates.io dependencies I think that is not important
Good point. But for the current project they are.
Something we've discussed with @Xanewok at zulip is that we can also fold parsing into the mix and have a three-state repr:
```rust
enum SourceState {
    Compressed(Vec<u8>),
    Decompressed(String),
    Parsed(TreeArc<ast::SourceFile>),
}
```
the repr could change dynamically (so, interior mutability is required) depending on access patterns and memory usage. This should also allow us to incrementally reparse files
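A minimal sketch of how a shared handle could promote its repr on access via interior mutability (the `Source` wrapper, the `SyntaxTree` placeholder standing in for `TreeArc<ast::SourceFile>`, and the codec/parser stubs are all hypothetical):

```rust
use std::cell::RefCell;

// Placeholder for rust-analyzer's real syntax tree (TreeArc<ast::SourceFile>).
#[derive(Clone)]
struct SyntaxTree(String);

// Hypothetical codec and parser stubs to keep the sketch self-contained.
fn decompress(bytes: &[u8]) -> String { String::from_utf8(bytes.to_vec()).unwrap() }
fn parse(text: &str) -> SyntaxTree { SyntaxTree(text.to_string()) }

enum SourceState {
    Compressed(Vec<u8>),
    Decompressed(String),
    Parsed(SyntaxTree),
}

// Interior mutability lets a shared `&Source` change its repr on access.
struct Source {
    state: RefCell<SourceState>,
}

impl Source {
    fn tree(&self) -> SyntaxTree {
        let mut state = self.state.borrow_mut();
        // Take the current repr out, leaving a cheap placeholder behind.
        let old = std::mem::replace(&mut *state, SourceState::Compressed(Vec::new()));
        let text = match old {
            SourceState::Parsed(tree) => {
                *state = SourceState::Parsed(tree.clone());
                return tree;
            }
            SourceState::Compressed(bytes) => decompress(&bytes),
            SourceState::Decompressed(text) => text,
        };
        // Promote to the Parsed repr; a memory-pressure pass could later
        // demote it back to Decompressed or Compressed.
        let tree = parse(&text);
        *state = SourceState::Parsed(tree.clone());
        tree
    }
}
```

The demotion direction (Parsed back down to Compressed) would be driven by the same kind of periodic sweep already used for syntax trees.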
(Another) crazy idea: Store source code and other large (meta)data in a sqlite or similar database-in-a-file system. This would allow us to reduce memory usage while also having minimal effects on performance.
The new dependencies are not insignificant, but they would probably be acceptable. This approach also scales better for huge projects.
@spadaval this might work at the `salsa` level, see https://github.com/salsa-rs/salsa/issues/10.
I gave this a try at the VFS level, using LZ4:
Run | RSS (MB) | CPU time (s) | Mem (MB) |
---|---|---|---|
before | 812 | 20.24 | 764 |
before | 812 | 18.69 | 765 |
before | 814 | 20.74 | 764 |
after | 841 | 19.76 | 751 |
after | 842 | 21.58 | 751 |
after | 813 | 19.54 | 751 |
Uncompressed source code is 43 MB. The tests consisted of starting Code with only RA's `main.rs` open. I didn't use the custom dictionary feature of the LZ4 crate, but that might help a little, too.
Overall I'm not convinced this is worth it, what do you think?
Yeah, seems like it's not worth it at this time!
Thanks for quantifying the wins here @lnicola, that's super helpful!
@Veykril think we should revisit this? See table above.
Yeah, I think this would be good to revisit (the VFS takes up ~100 MB on r-a for me currently)
Some updated baseline numbers after starting Code with `main.rs`: `FileTextQuery` takes 59 MB.

I might suggest trying a more modern compression algorithm like zstd instead of lz4 this time.
The issue with zstd is that it pulls in a lot of C code.
Anyway, I tried this again and the memory usage grew, so there's probably something weird going on that's not related to the compression.
@lnicola do you still have a branch where you tried this approach (if not, a description is totally fine!). I wanted to try it out with zstd where, for organizational reasons, it's substantially easier for me to bundle a bunch of C code.
(if it's successful, it would likely be a private set of patches I wouldn't send as a PR for the aforementioned "way too much C code" reasons.)
@davidbarsky yeah, I'll clean it up and rebase tomorrow, but it's pretty trivial.
I think at the time I actually did some tests against zstd (outside of RA, by compressing the files). I don't remember the results, but I think using a custom dictionary wasn't really worth it.
The other thing that's needed here is at least a one-item LRU cache, because without that we're going to keep recompressing the current file when the user is typing. I don't think we generally hit the VFS too much otherwise (except when switching branches). https://github.com/lnicola/rust-analyzer/tree/vfs-log adds some logging we can use to double-check.
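The one-item cache can be as simple as remembering the last file looked up, so the file being typed in stays uncompressed (the `Vfs` shape here is hypothetical, and the codec stubs just copy bytes, standing in for a real algorithm such as LZ4):

```rust
use std::collections::HashMap;

type FileId = u32;

// Hypothetical codec stubs; a real implementation would use e.g. LZ4.
fn compress(text: &str) -> Vec<u8> { text.as_bytes().to_vec() }
fn decompress(bytes: &[u8]) -> String { String::from_utf8(bytes.to_vec()).unwrap() }

struct Vfs {
    compressed: HashMap<FileId, Vec<u8>>,
    // One-item LRU: the most recently accessed file, kept uncompressed so
    // typing in the active file doesn't recompress it on every change.
    hot: Option<(FileId, String)>,
}

impl Vfs {
    fn file_text(&mut self, id: FileId) -> &str {
        let already_hot = matches!(&self.hot, Some((hot_id, _)) if *hot_id == id);
        if !already_hot {
            // Demote the previously hot file back to compressed storage...
            if let Some((old_id, old_text)) = self.hot.take() {
                self.compressed.insert(old_id, compress(&old_text));
            }
            // ...and promote the requested one.
            let text = decompress(&self.compressed.remove(&id).unwrap());
            self.hot = Some((id, text));
        }
        &self.hot.as_ref().unwrap().1
    }
}
```

With this shape, repeated edits to the active file only ever touch the `hot` slot; compression work happens solely when focus moves to a different file.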
https://github.com/rust-lang/rust-analyzer/pull/16307 makes this obsolete by dropping the file contents from the VFS.
We could still compress the contents in the salsa db, but I'm not sure how to implement that without thrashing on the active set of files. Can queries change the inputs?
Crazy idea: source code occupies a non-negligible amount of memory. For rust-analyzer, it is
which actually is worse than I expected (could this be a bug? Do we include unrelated files?).
It might be a good idea to compress this code on the fly! Specifically, we can store text not as `Arc<String>`, but as an opaque `TextBuffer` object, which can compress/decompress large text on the fly: we compress all files after the initial indexing of the project, and decompress them on demand. This shouldn't be too hard to implement, actually!
To clarify, I still think it's a good idea to keep all the source code in memory, to avoid IO errors, but we could use less memory.