rust-lang / rust-analyzer

A Rust compiler front-end for IDEs
https://rust-analyzer.github.io/
Apache License 2.0

Compress rarely modified files #869

Closed · matklad closed this issue 4 months ago

matklad commented 5 years ago

Crazy idea: source code occupies a non-negligible amount of memory. For rust-analyzer, it is

4222 (47mb) files

which actually is worse than I expected (could this be a bug? Do we include unrelated files?).

It might be a good idea to compress this code on the fly! Specifically, we can store text not as Arc<String>, but as an opaque TextBuffer object, which can compress/decompress large text on the fly. Specifically, we should compress all files after the initial indexing of the project, and decompress them on-demand.

This shouldn't be too hard to implement, actually!

To clarify, I still think it's a good idea to keep all the source code in memory, to avoid IO errors, but we could use less memory.
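The opaque `TextBuffer` idea above could be sketched roughly as follows. This is a hypothetical API, not the actual rust-analyzer type; a real implementation would use a codec like lz4 or zstd, so a toy run-length encoding stands in here to keep the example dependency-free.

```rust
// Sketch of the proposed TextBuffer: stores text either plain or
// compressed, compressing after indexing and decompressing on demand.

enum Repr {
    Plain(String),
    Compressed(Vec<u8>),
}

pub struct TextBuffer {
    repr: Repr,
}

impl TextBuffer {
    pub fn new(text: String) -> TextBuffer {
        TextBuffer { repr: Repr::Plain(text) }
    }

    /// Switch to the compressed representation (e.g. after initial indexing).
    pub fn compress(&mut self) {
        if let Repr::Plain(text) = &self.repr {
            self.repr = Repr::Compressed(rle_encode(text.as_bytes()));
        }
    }

    /// Decompress on demand, caching the plain text again.
    pub fn text(&mut self) -> &str {
        if let Repr::Compressed(bytes) = &self.repr {
            let text = String::from_utf8(rle_decode(bytes)).unwrap();
            self.repr = Repr::Plain(text);
        }
        match &self.repr {
            Repr::Plain(text) => text,
            Repr::Compressed(_) => unreachable!(),
        }
    }
}

// Toy codec: (count, byte) pairs. Stand-in for lz4/zstd.
fn rle_encode(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut iter = data.iter().peekable();
    while let Some(&b) = iter.next() {
        let mut count: u8 = 1;
        while count < u8::MAX && iter.peek() == Some(&&b) {
            iter.next();
            count += 1;
        }
        out.push(count);
        out.push(b);
    }
    out
}

fn rle_decode(data: &[u8]) -> Vec<u8> {
    data.chunks(2)
        .flat_map(|c| std::iter::repeat(c[1]).take(c[0] as usize))
        .collect()
}

fn main() {
    let mut buf = TextBuffer::new("fn main() {}".to_string());
    buf.compress();
    assert_eq!(buf.text(), "fn main() {}");
}
```

Callers that currently hold an `Arc<String>` would instead hold this buffer (likely behind a lock or cell, since `text` needs `&mut self`).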

vipentti commented 5 years ago

I wonder if it would be possible to have some sort of LRU caching for the compressed source, where you compress everything, but things that are being frequently changed may stay in memory uncompressed to avoid unnecessary compression/decompression. I guess it also depends on the compression itself and what kind of overhead it has.

jrmuizel commented 5 years ago

Why is the source stored at all? Can't it be read from disk as needed?

matklad commented 5 years ago

@jrmuizel it's important not to let arbitrary IO into the core incremental computation. We can't guarantee that reading a file twice will yield the same result, and if we get different results in the same incremental session, we'll be in an inconsistent state.

What should be possible is to "copy" files to some ".rust-analyzer" dir and read them from there, with the contract that an IO error while reading from this rust-analyzer-private dir is fatal and requires a restart.

Overall, spending 50 megs of RAM to store text seems like a much better deal than dealing with IO in any form. A good thing about compression is that it gives us memory savings in a purely functional context.

matklad commented 5 years ago

I wonder if it would be possible to have some sort of LRU

The simplest form of LRU is "compress everything once in a while". This is what we do for syntax trees, and it seems to work.

vipentti commented 5 years ago

How does ra_vfs relate to the compression? ra_vfs monitors the files and has them in memory, right? So should the compression already happen in ra_vfs?

matklad commented 5 years ago

Yeah, I think so!

Currently, Vfs stores text as text: Arc<String>, and it could be changed to a more abstract type. I can't sketch the whole design off the top of my head; I expect there will be some interesting questions about lifetimes and interior mutability. If we are introducing a new type for this, it might also be a good occasion to switch the LSP layer to patch files with edits on modifications, instead of asking the client for the whole text buffer every time.

marcogroppo commented 5 years ago

Hi! I've noticed that a lot of non-Rust files (LICENSE, AUTHORS, Dockerfile, COPYING, .gitignore, etc.) are included in the salsa db. Is this by design?

matklad commented 5 years ago

@marcogroppo that's definitely a bug, only .rs files should be included

matklad commented 5 years ago

found it:

https://github.com/rust-analyzer/ra_vfs/blob/beac2769f48474a7dc33014a982614c5c13804ea/src/roots.rs#L97-L100

Here we include extension-less files so that we don't ignore directories. We should probably do additional filtering somewhere at the IO layer to filter out extension-less files.
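The fix discussed here amounts to a predicate like the following (hypothetical function, not the actual ra_vfs code): always walk directories so nested .rs files aren't missed, but drop non-Rust files at the IO layer.

```rust
use std::path::Path;

// Illustrative filter: directories are always included so traversal
// continues, while files are kept only if they have the .rs extension.
fn should_include(path: &Path, is_dir: bool) -> bool {
    if is_dir {
        // Never skip directories, or we'd miss .rs files nested inside them.
        return true;
    }
    path.extension().map_or(false, |ext| ext == "rs")
}

fn main() {
    assert!(should_include(Path::new("src/lib.rs"), false));
    assert!(!should_include(Path::new("LICENSE"), false)); // extension-less file
    assert!(should_include(Path::new("src"), true)); // extension-less dir is fine
}
```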

marcogroppo commented 5 years ago

I did a quick check, and with the ra_vfs patch the memory occupied by rust-analyzer's source code is now 3586 (38mb) files (without the patch: 4305 (47mb) files). Another thing I've noticed is that the source code includes tests, benchmarks and examples from libcore and other dependencies.

killercup commented 5 years ago

This seems like an interesting idea but one should note that some operating systems already compress memory pages when under pressure (macOS by default, Linux with zram).

vipentti commented 5 years ago

I think once we can properly ignore files that are not necessary, like tests, benchmarks or examples from external sources, the amount of files should be reduced even further.

matklad commented 5 years ago

I think once we can properly ignore files that are not necessary, like tests, benchmarks or examples from external sources, the amount of files should be reduced even further.

I think we should extend the vfs API to allow specifying exclusions together with the roots. Then, we can change the logic in rust-analyzer to ignore tests|benches|examples for crates from crates.io.
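The proposed exclusion rule could be sketched like this (names are illustrative, not the real vfs API): for a root that comes from crates.io, skip the top-level tests/, benches/ and examples/ directories.

```rust
use std::path::Path;

// Hypothetical exclusion check for registry (crates.io) dependencies:
// paths are relative to the crate root, and only the first path
// component decides whether the file is excluded.
fn excluded_for_registry_crate(rel_path: &Path) -> bool {
    matches!(
        rel_path
            .components()
            .next()
            .and_then(|c| c.as_os_str().to_str()),
        Some("tests" | "benches" | "examples")
    )
}

fn main() {
    assert!(excluded_for_registry_crate(Path::new("tests/it.rs")));
    assert!(excluded_for_registry_crate(Path::new("examples/demo.rs")));
    assert!(!excluded_for_registry_crate(Path::new("src/lib.rs")));
}
```

For the workspace's own crates the check would simply not be applied, which matches the later point in the thread that go-to-definition into local tests should keep working.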

kjeremy commented 5 years ago

Could we use the ignore crate for this?

vipentti commented 5 years ago

I think we can use ignore to at least get .gitignore support; maybe we could use it to ignore other things as well?

matklad commented 5 years ago

Yeah, using gitignore is fine!

We only need to think carefully about the interface between the VFS and the rest of the world, such that consumers can flexibly choose the strategy. Perhaps VFS should just accept a BoxFn, such that using gitignore is strictly the consumer's business?
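The "VFS just accepts a boxed predicate" interface floated here could look roughly like this (all names hypothetical). Whether the closure consults .gitignore via the ignore crate, or hard-codes an extension check, is then purely the consumer's choice.

```rust
use std::path::{Path, PathBuf};

// A boxed filter the consumer hands to the VFS; the VFS itself knows
// nothing about gitignore or any other policy.
type FileFilter = Box<dyn Fn(&Path) -> bool + Send + Sync>;

struct Root {
    path: PathBuf,
    filter: FileFilter,
}

impl Root {
    fn new(path: PathBuf, filter: FileFilter) -> Root {
        Root { path, filter }
    }

    // A file belongs to this root if it is under the root's path
    // and the consumer-supplied filter accepts it.
    fn accepts(&self, file: &Path) -> bool {
        file.starts_with(&self.path) && (self.filter)(file)
    }
}

fn main() {
    // Consumer-chosen policy: only .rs files.
    let root = Root::new(
        PathBuf::from("/project"),
        Box::new(|p| p.extension().map_or(false, |e| e == "rs")),
    );
    assert!(root.accepts(Path::new("/project/src/main.rs")));
    assert!(!root.accepts(Path::new("/project/Dockerfile")));
}
```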

lnicola commented 5 years ago

Wouldn't it be good to include the examples, tests and benchmarks, so things like go to definition and find references keep working?

matklad commented 5 years ago

@lnicola for crates.io dependencies I think that is not important

lnicola commented 5 years ago

Good point. But for the current project they are.

matklad commented 5 years ago

Something we've discussed with @Xanewok on Zulip is that we can also fold parsing into the mix and have a three-state repr:

enum SourceState {
    Compressed(Vec<u8>),
    Decompressed(String),
    Parsed(TreeArc<ast::SourceFile>),
}

The repr could change dynamically (so interior mutability is required) depending on access patterns and memory usage. This should also allow us to incrementally reparse files.
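A minimal sketch of the dynamic three-state idea, with the interior mutability mentioned above. The types are stand-ins: the real Parsed variant would hold a TreeArc<ast::SourceFile>, which needs the parser, so a line count substitutes for it here, and the compressed form is just the raw bytes.

```rust
use std::cell::RefCell;

// Stand-in for the three-state repr discussed in the thread.
enum SourceState {
    Compressed(Vec<u8>),   // placeholder for lz4/zstd-compressed bytes
    Decompressed(String),
    Parsed(usize),         // placeholder for TreeArc<ast::SourceFile>
}

struct SourceFile {
    // Interior mutability: a read-only access may still upgrade the repr.
    state: RefCell<SourceState>,
}

impl SourceFile {
    fn new(text: String) -> SourceFile {
        SourceFile { state: RefCell::new(SourceState::Decompressed(text)) }
    }

    // "Parsing" upgrades the repr in place behind a shared reference.
    fn line_count(&self) -> usize {
        let mut state = self.state.borrow_mut();
        if let SourceState::Decompressed(text) = &*state {
            let parsed = text.lines().count();
            *state = SourceState::Parsed(parsed);
        }
        match &*state {
            SourceState::Parsed(n) => *n,
            _ => unimplemented!("a full implementation would decompress first"),
        }
    }
}

fn main() {
    let file = SourceFile::new("fn main() {\n}\n".to_string());
    assert_eq!(file.line_count(), 2);
    assert_eq!(file.line_count(), 2); // second call hits the Parsed state directly
}
```

In a multi-threaded setting the RefCell would need to become a Mutex or RwLock; the state-transition idea stays the same.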

spadaval commented 4 years ago

(Another) crazy idea: Store source code and other large (meta)data in a sqlite or similar database-in-a-file system. This would allow us to reduce memory usage while also having minimal effects on performance.

The new dependencies are not insignificant, but they would probably be acceptable. This approach also scales better for huge projects.

lnicola commented 4 years ago

@spadaval this might work at the salsa level, see https://github.com/salsa-rs/salsa/issues/10.

lnicola commented 3 years ago

I gave this a try at the VFS level, using LZ4:

Run      RSS (MB)   CPU time (s)   Mem (MB)
before   812        20.24          764
before   812        18.69          765
before   814        20.74          764
after    841        19.76          751
after    842        21.58          751
after    813        19.54          751

Uncompressed source code is 43 MB. The tests consisted of starting Code with only RA's main.rs open. I didn't use the custom dictionary feature of the LZ4 crate, but that might help a little, too.

Overall, I'm not convinced this is worth it. What do you think?

matklad commented 3 years ago

Yeah, seems like it's not worth it at this time!

Thanks for quantifying the wins here @lnicola, that's super helpful!

lnicola commented 1 year ago

@Veykril, do you think we should revisit this? See the table above.

Veykril commented 1 year ago

Yeah, I think this would be good to revisit (vfs takes up ~100 mb on r-a for me currently)

lnicola commented 1 year ago

Some updated baseline numbers after starting Code with main.rs:

nehalem501 commented 6 months ago

I might suggest trying with a more modern compression algorithm like zstd instead of lz4 this time.

lnicola commented 6 months ago

The issue with zstd is that it pulls in a lot of C code.

Anyway, I tried this again and the memory usage grew, so there's probably something weird going on that's not related to the compression.

davidbarsky commented 6 months ago

@lnicola do you still have a branch where you tried this approach? (If not, a description is totally fine!) I wanted to try it out with zstd, where, for organizational reasons, it's substantially easier for me to bundle a bunch of C code.

(if it's successful, it would likely be a private set of patches I wouldn't send as a PR for the aforementioned "way too much C code" reasons.)

lnicola commented 6 months ago

@davidbarsky yeah, I'll clean it up and rebase tomorrow, but it's pretty trivial.

I think at the time I actually did some tests against zstd (outside of RA, by compressing the files). I don't remember the results, but I think using a custom dictionary wasn't really worth it.

The other thing that's needed here is at least a one-item LRU cache, because without that we're going to keep recompressing the current file when the user is typing. I don't think we generally hit the VFS too much otherwise (except when switching branches). https://github.com/lnicola/rust-analyzer/tree/vfs-log adds some logging we can use to double-check.
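The one-item cache described here could be sketched as follows (hypothetical types, not the real VFS): keep the most recently accessed file uncompressed, so typing in it doesn't trigger a recompress on every keystroke; touching a different file evicts the previous one back to compressed storage.

```rust
use std::collections::HashMap;

// The single uncompressed "hot" slot.
struct HotFile {
    id: u32,
    text: String,
}

struct CompressedVfs {
    // file id -> compressed bytes; plain bytes stand in for a real codec.
    files: HashMap<u32, Vec<u8>>,
    hot: Option<HotFile>,
}

impl CompressedVfs {
    fn get(&mut self, id: u32) -> &str {
        if self.hot.as_ref().map(|h| h.id) != Some(id) {
            // Evict the previous hot file back to compressed storage...
            if let Some(prev) = self.hot.take() {
                // ...a real implementation would compress here.
                self.files.insert(prev.id, prev.text.into_bytes());
            }
            // ...and decompress the requested one into the hot slot.
            let bytes = self.files.remove(&id).expect("unknown file");
            let text = String::from_utf8(bytes).unwrap();
            self.hot = Some(HotFile { id, text });
        }
        &self.hot.as_ref().unwrap().text
    }
}

fn main() {
    let mut vfs = CompressedVfs {
        files: [(1, b"fn a() {}".to_vec()), (2, b"fn b() {}".to_vec())].into(),
        hot: None,
    };
    assert_eq!(vfs.get(1), "fn a() {}");
    assert_eq!(vfs.get(1), "fn a() {}"); // repeated access: no recompression
    assert_eq!(vfs.get(2), "fn b() {}"); // evicts file 1 back to compressed form
}
```

A deeper LRU would replace the single `Option` with a small ordered list, but as noted above, one slot already covers the "user is typing in one file" case.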

lnicola commented 4 months ago

https://github.com/rust-lang/rust-analyzer/pull/16307 makes this obsolete by dropping the file contents from the VFS.

We could still compress the contents in the salsa db, but I'm not sure how to implement that without thrashing on the active set of files. Can queries change the inputs?