rust-lang / cargo

The Rust package manager
https://doc.rust-lang.org/cargo
Apache License 2.0

Per-user compiled artifact cache #5931

Open djc opened 6 years ago

djc commented 6 years ago

I was wondering if anyone has contemplated somehow sharing compiled crates. If I have a number of projects on disk that often have similar dependencies, I'm spending a lot of time recompiling the same packages. (Even correcting for features, compiler flags and compilation profiles.) Would it make sense to store symlinks in ~/.cargo or equivalent pointing to compiled artefacts?

alexcrichton commented 6 years ago

There's been musings about this historically but never any degree of serious consideration. I've always wanted to explore it though! (I think it's definitely plausible)

aidanhs commented 6 years ago

sccache is one option here - it has a local disk cache in addition to the more exotic options to store compiled artifacts in the cloud.

djc commented 6 years ago

sccache would be good for the compilation time part, but it'd be nice to also get a handle on the disk size part of it.

Eh2406 commented 5 years ago

cc https://github.com/rust-lang/cargo/issues/6229

Vlad-Shcherbina commented 5 years ago

I think you can put

[build]
target-dir = "/my/shared/target/dir"

in ~/.cargo/config.

But I have no idea if this mode is officially supported. Is it?

Eh2406 commented 5 years ago

Yes it is, as is setting it with the corresponding environment variable. However, the problem of cargo never deleting unused artifacts gets to be dramatic quickly. Hence the connection to #6229
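
For illustration, the environment-variable form of the same setting (the path here is arbitrary) would be:

$ CARGO_TARGET_DIR=/my/shared/target/dir cargo build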

epage commented 1 year ago

@joshtriplett and I had a brainstorming session on this at RustNL last week.

It'd be great if cargo could have a very small subset of sccache's logic: per-user caching of intermediate build artifacts. By building this into cargo, we can tie it into all that cargo knows and can make extensions to better support it.

Risks

• To mitigate problems with cache poisoning
• As a contingency for if the cache is poisoned, we need a way to clear the cache (see also #3289)
• To mitigate running out of disk space, we need a GC / prune (see also #6509)
• Locking strategy to mitigate race conditions / locking performance
• Transition plan (modeled off of sparse registry)

epage commented 1 year ago

Wonder if something like reflink would be useful

epage commented 1 year ago

See also #7150

epage commented 1 year ago

Some complications that came up when discussing this with ehuss.

First, some background. We track rebuilds in two ways. The first is an external fingerprint, a hash we use to tell when to rebuild. The second is a hash of the build inputs that we pass to rustc with -Cmetadata, which is used to keep symbols unique. We include this hash in the file name, so if -Cmetadata changes, the filename changes; if the file doesn't exist, that is a sure sign it needs to be built.
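
For concreteness, a cargo-driven rustc invocation embeds that hash roughly like this (crate name, hash value, and paths are illustrative, and most of the real flags are omitted):

# -Cmetadata keeps symbols unique; -Cextra-filename folds the same hash into the
# artifact name, e.g. target/debug/deps/libsyn-16ef26d2e1b0a40e.rlib
$ rustc --crate-name syn --edition 2021 src/lib.rs --crate-type lib \
    -C metadata=16ef26d2e1b0a40e -C extra-filename=-16ef26d2e1b0a40e \
    --out-dir target/debug/deps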

Problems

I guess the first question is whether the per-user cache should be organized around the fingerprint, -Cmetadata, or something else. Well, -Cmetadata isn't an option so long as it doesn't include RUSTFLAGS, which it shouldn't, so it would more likely be the fingerprint or a new hash type we add, one that we maybe reuse for the file names and ensure doesn't cause problems with rustc.

weihanglo commented 1 year ago

Cargo uses paths relative to the workspace root for path dependencies to generate stable hashes. This causes an issue (https://github.com/rust-lang/cargo/issues/12516) when sharing target directories between packages with the same name, version, and relative path to their workspace.

epage commented 1 year ago

For me, the biggest thing that needs to be figured out before any other progress is worth it is how to get a reasonable amount of value out of this cache.

Take my system

This is a "bottom of the stack" package. As you go up the stack, the impact of version combinations grows dramatically.

I worry a per-user cache's value will only be slightly more than making cargo clean && cargo build faster and that doesn't feel worth the complexity to me.

jplatte commented 1 year ago

How did you do that analysis? I'd be interested in running it on my own system.

Also, re caching and RUSTFLAGS, could it be an option to simply fall back to the existing caching scheme (project-specific target dir) if RUSTFLAGS is set at all? Personally, I work on very few projects that utilize RUSTFLAGS (AFAIK... although I also have it configured globally right now, with -C link-arg=-fuse-ld=mold, which I'd have to disable or find an alternative solution for), so everything else benefitting from a shared cache dir might already be useful.
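
That global mold setup amounts to something like the following (assuming it is passed via RUSTFLAGS rather than a target-specific config entry), which is exactly the kind of flag that would differ between users:

$ RUSTFLAGS="-C link-arg=-fuse-ld=mold" cargo build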

epage commented 1 year ago

How did you do that analysis? I'd be interested in running it on my own system.

Pre-req: I keep all repos in a single folder.

$ ls */Cargo.lock | wc -l                                                             # number of workspaces
$ rg 'name = "syn"' */Cargo.lock -l | wc -l                                           # workspaces pulling in syn
$ rg 'name = "syn"' */Cargo.lock -A 1 | rg version | rg -o '".*"' | sort -u | wc -l   # distinct syn versions pulled in

(and yes, there are likely rg features I can use to code-golf this)

jplatte commented 1 year ago

Thanks! I keep my projects in two dirs (approximating active and inactive projects), but running this separately on both I get the following. I also included futures-util as another commonly-used crate, one that does not get released as often.

stat                                   active   inactive
number of crates / workspaces              34         91
workspaces pulling in syn                  29         72
different versions of syn                  15         33
workspaces pulling in futures-util         20         44
different versions of futures-util          3         14
djc commented 1 year ago

Also syn is part of a set of crates that gets a lot of little bumps. This is common for dtolnay crates, but not so much for a whole host of other crates -- so I'm not sure this particular test is very representative. (Note that I'm definitely not disagreeing that the utility of a per-user compiled artifact cache might not be as great as hoped.)

FWIW, given what I see Rust-Analyzer doing in a large workspace at work (some 670 crates are involved), it seems to be doing a lot of recompilation even with only weekly updates to the dependencies, so even within a single workspace there might be some wins?

lu-zero commented 1 year ago

Somebody with a deduplicating file system could share their statistics for a theoretical upper bound?

epage commented 1 year ago

@djc yes, syn bumps a lot (I wish more crates did that, personally). I chose it partly because of the recent proc-macro conversions and the fact that, by being a bottom-of-the-stack crate, it would cause cache misses for everything above it. This highlights the problem of dependencies causing cache misses.

ia0 commented 1 year ago

I might be missing something obvious, but what should be the behavior of cargo clean? Should it remove all artifacts, only those that would be otherwise used by the current crate, or do nothing at all?

I'm asking because if cargo clean in one crate would increase compile time in an unrelated crate by removing a compiled artifact, then this looks like a negative side-effect (compared to the positive side-effect of compiling one crate reducing the compile time of an unrelated crate because they happen to share an artifact).

epage commented 1 year ago

My expectation is that cargo clean would clean your target/ directory and not touch the cache. We are looking at adding a GC for global resources which would then also apply to the cache. Depending on feedback, maybe we'd explore other aspects.

ia0 commented 1 year ago

Thanks! This looks reasonable to me. The only possible issue I see is when cache corruption occurs (either because of tooling like rust-analyzer or an ephemeral hardware issue). In that case, because we lost cache isolation by design, we would need to wipe everything. But as far as I'm concerned, that's acceptable.

lu-zero commented 1 year ago

IMHO it would be nice if cargo could offload the caching management completely and just offer an interface to look up cacheable items and provide them on misses, leaving all the rest to other tools (e.g. sccache). This way cargo doesn't need to have more logic inside, and people using org-wide caches can just write/use a smaller integration.

epage commented 1 year ago

@lu-zero what do you see as "the rest" for being left to other tools?

I see a lot of the complexity involved here being determining what should be cached. While some level of complexity will be involved with GC, we'll be needing that anyways for things like "cargo script".

A downside to offloading completely is that it makes it so other users don't get caching out-of-the-box which has a big impact on how many would use caching.

It's also easier for us to iterate on the design when it's all internal vs. also working on an unstable interface to then stabilize. I see this as a potential stepping stone for additional caching strategies.

lu-zero commented 1 year ago

Right now ccache and similar tools interpose between the build system and the actual compiler, and their caching logic is tied to capturing the context as best they can.

Then, based on that information, they can look up cached items and freshen/retrieve them with arbitrarily complex storage logic/policies.

Cargo already has the full picture, so finding a good way to deliver it to, e.g., sccache would allow it to cache more and potentially better, and we would not have to reinvent the wheel regarding how to manage the cache itself, distribute it, and so on and so forth.

I think that focusing on building that interface might improve the current situation for sccache users, and if somebody is willing to contribute a tiny cache, the simplest one possible, we could later integrate it into cargo instead of suggesting people install sccache.

epage commented 1 year ago

We talked about this more in office hours.

While local development won't initially benefit from this, this could help CI a lot because the cache size would be better managed without external tools needing to shrink things.

Then when we get to plugin support, this would benefit CI even further because you could then have fine-grained caching across CI jobs.

Projects could then offer read-only access to this cache so local development could be sped up.

epage commented 1 year ago

A potential path for this

Initial: For packages that are (1) immutable (non-local), (2) deterministic (no build.rs, no proc-macros), and (3) built without RUSTFLAGS, build these in a shared target directory and have dependents lock and point to both shared directories

From there, we could do (note: this is a tree of tasks)

jplatte commented 1 year ago

While local development won't initially benefit from this, this could help CI a lot because the cache size would be better managed without external tools needing to shrink things.

How would CI benefit? Is the idea that only those packages that are deterministic would be stored in CI cache, so it is smaller than current solutions, while guaranteeing that it's reusable as long as dependencies don't change?

Also local development won't benefit? At all? I think there should at least be a little bit of deduplication between projects, even if it's just low-level-ish crates?

Anyways, the plan above seems like a great way to make progress. Start small and build things up gradually. Also further incentivizes making libs compile-time deterministic.

epage commented 1 year ago

How would CI benefit? Is the idea that only those packages that are deterministic would be stored in CI cache, so it is smaller than current solutions, while guaranteeing that it's reusable as long as dependencies don't change?

I might have jumped ahead on that because keeping the cache size smaller for non-local builds (which would make CI cache upload/download faster) would require GC which would come later.

Also local development won't benefit? At all? I think there should at least be a little bit of deduplication between projects, even if it's just low-level-ish crates?

To be more precise, "the amount of benefit for local development will be small enough that it would not justify this feature on its own". For any no-deps packages which have few releases, you'd get sharing, but then you won't be updating them often anyway. I also worry about stabilizing that, or being stuck that way too long, because it encourages two bad practices: (1) people over-optimizing for no-deps and (2) people doing fewer, larger releases

lu-zero commented 1 year ago

@epage where would you put this caching layer?

Right now the caching with sccache happens at compiler invocation, with the problems and restrictions that should be well known now.

If the caching layer has its say at dependency resolution, you could give priority to whatever is cached as long as it fits the version constraints and potentially have more hits; if it happens post-resolution, it would still be an improvement over the status quo.

epage commented 1 year ago

I expect the cache to mostly work by

  • When expecting to compile a package

    • If in cache, copy it into CARGO_TARGET_DIR
    • If not in cache, compile it and write it to cache

That said, there is an effort to generalize the yanked/offline/error status for potential versions to unblock experiments in allowing alternative sources to affect version selection. That could then also be used as part of caching, though it'd likely have limits because it wouldn't easily be able to tell if the cached item has the right fingerprint.

epage commented 1 year ago

That said, there is an effort to generalize the yanked/offline/error status for potential versions to unblock experiments in allowing alternative sources to affect version selection. That could then also be used as part of caching, though it'd likely have limits because it wouldn't easily be able to tell if the cached item has the right fingerprint.

I should clarify, though, that this only has the choice of making versions available or not. It's a very crude tool, which would mean your lockfiles (whether checked in or not) would be changed to conform to this and might not even be able to be generated.

lu-zero commented 1 year ago

Do we have somewhere a list of all the sources of change that would invalidate the cache (and thus participate in the item fingerprint)? I assume it should be the same as what cargo uses to decide freshness.

epage commented 1 year ago

We'd be using cargo's fingerprint code to determine this

lu-zero commented 1 year ago

Probably relaxing at least the "target src path relative to workspace" entry and adding a few more to take care of build.rs (env vars and external deps/autocfg checks).

When I benchmark, I tend to keep different builds around using different target-dirs; having the cache keep the dependencies shared would make that task quite a bit quicker.
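
For example (directory names are arbitrary), each benchmark build today gets its own full target dir:

$ CARGO_TARGET_DIR=target-baseline cargo build --release
$ CARGO_TARGET_DIR=target-patched cargo build --release

With a shared artifact cache, the dependencies common to both would only be compiled once.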

codyps commented 11 months ago

Note that setting target-dir or CARGO_TARGET_DIR to a fixed location right now (as suggested in https://github.com/rust-lang/cargo/issues/5931#issuecomment-441323704) to get a per-user target dir will break things badly in some cases due to https://github.com/rust-lang/cargo/issues/12516 (edit: apparently already linked above in the comments)

idelvall commented 10 months ago
  • When expecting to compile a package

    • If in cache, copy it into CARGO_TARGET_DIR
    • If not in cache, compile it and write it to cache

@epage could you avoid the copy to CARGO_TARGET_DIR and read the binaries directly from the new cache?

That would help us have three mount caches at earthly/lib/rust without duplicated entries: one for CARGO_HOME, another for CARGO_TARGET_DIR, and another for the new cache.

epage commented 10 months ago

Yes, we could have locks on a per-cached item basis and read directly from it. Whether we do depends on how much we trust the end-to-end process.

RobJellinghaus commented 8 months ago

Hi folks, chiming in here to merge two streams: some of us at Microsoft did a hackathon project to prototype a per-user Cargo cache, late last September.

Here's the Zulip chat: https://rust-lang.zulipchat.com/#narrow/stream/246057-t-cargo/topic/Per-user.20build.20caches Our HackMD with status as of our last discussion of this with the Cargo team: https://hackmd.io/R64ykWblRr-y9-jeWNLGtQ?view Comparison of our branch's changes: https://github.com/rust-lang/cargo/compare/master...arlosi:cargo:hackathon?expand=1

Our initial testing of this generally showed surprisingly small speedup, even when things were entirely cached. It seems that rustc is just really damn fast at crate compilation :-O And that (as we know) the long pole is always the final LLVM binary build.

This change took the approach of creating a user-shared cache using the cacache crate, which worked very well and allowed us to get something functioning quickly, but which was generally considered by the Cargo team to perpetuate the anti-pattern of having a not very human-readable filesystem layout. @arlosi had a background goal of creating a more human-manageable filesystem cache provider implementation, but I know other priorities have supervened since then.

I do think this change's approach of having a very narrow cache interface is a good design direction that was reasonably proven by this experiment.

Happy to discuss our approach on any level, hope it is useful to people wanting to move this further forwards.

mydoghasfleas commented 6 months ago

Our initial testing of this generally showed surprisingly small speedup, even when things were entirely cached. It seems that rustc is just really damn fast at crate compilation :-O And that (as we know) the long pole is always the final LLVM binary build.

But the problem being solved here is not only the speed, but the amount of storage space being consumed. When you have dozens of projects each compiling the same crates and each easily taking up 2GB, your drive starts filling up very quickly!

ssokolow commented 6 months ago

Our initial testing of this generally showed surprisingly small speedup, even when things were entirely cached. It seems that rustc is just really damn fast at crate compilation :-O And that (as we know) the long pole is always the final LLVM binary build.

But the problem being solved here is not only the speed, but the amount of storage space being consumed. When you have dozens of projects each compiling the same crates and each easily taking up 2GB, your drive starts filling up very quickly!

Exactly.

I've now got a Ryzen 5 7600. Combined with mold, cargo clean; cargo build --release takes almost no time... but I've only got so much disk space and, even without deleting and re-building things to limit space consumption, I'd still rather avoid unnecessary write cycles on my SSD.

mydoghasfleas commented 6 months ago

Our initial testing of this generally showed surprisingly small speedup, even when things were entirely cached. It seems that rustc is just really damn fast at crate compilation :-O And that (as we know) the long pole is always the final LLVM binary build.

But the problem being solved here is not only the speed, but the amount of storage space being consumed. When you have dozens of projects each compiling the same crates and each easily taking up 2GB, your drive starts filling up very quickly!

Exactly.

I've now got a Ryzen 5 7600. Combined with mold, cargo clean; cargo build --release takes almost no time... but I've only got so much disk space and, even without deleting and re-building things to limit space consumption, I'd still rather avoid unnecessary write cycles on my SSD.

The write cycles issue is very pertinent!

soloturn commented 2 months ago

I am trying sccache, and it works well when recompiling the same component, but when compiling another component all crates are cache misses. The compiler flags are the same all the time and the components are closely related, as they are part of the COSMIC desktop. Why is this so, and in which part of cargo could this be fixed? @epage ?

CosmicHorrorDev commented 2 months ago

If the similar components use different feature sets in common dependencies, then it can still wind up recompiling a lot of crates.

How is this relevant to the issue?

weihanglo commented 2 months ago

Will mark https://github.com/rust-lang/cargo/issues/5931#issuecomment-2253326282 as off-topic for now; please use other channels to discuss questions. Let's leave this thread for the design of the feature itself. See also https://github.com/rust-lang/cargo/issues/14278#issuecomment-2253343450.

jaskij commented 2 months ago

I came across this while reading the design document, and decided to share my thoughts here. Hopefully this will be good input.

In CI, users generally have to declare what directory should be cached between jobs. This directory will be compressed and uploaded at the end of the job. If the next job's cache key matches, the tarball will be downloaded and decompressed. If too much is cached, the time for managing the cache can dwarf the benefits of the cache. Some third-party projects exist to help manage cache size.

This makes several assumptions about CI behavior, probably based on how GH Actions behaves. Note that other runners will behave differently; for example, by default GitLab does not upload the cache anywhere. I'm not even sure if the cache is compressed. This shouldn't be an issue, but it is a set of bad assumptions in the design doc and could theoretically lead to bad design.

I would also like to add that CI interactions should support multiple CI runners from the get-go. GitHub is, for better or worse, dominant, especially in mindshare, and the Rust project shouldn't be furthering its monopoly.


Hashes and fingerprinting

Stale caches are a major pain, to the point that I have learned to recognize the signs from working with at least two build systems. Currently, Cargo does not even use the whole hash, only 64 bits (sixteen hex digits) of it. I'm worried that with user-wide caches, collisions may happen. To that end, I'd prefer there to be a plaintext copy of all the data going into the hash, so an entry can be verified to be the correct one. Or at least use the full length of the hash.


Garbage collection

I've seen someone mention clearing old stuff by access time. This would work, in theory, but is also something to be careful around. For example, a lot of desktop systems use relatime, and some people set noatime for performance reasons. Are we even sure that the data is always available?

Personally, I would love to see something more advanced, with data tracking.


Setting the cache directory

It is a feature that would greatly improve flexibility, while being fairly simple. I know people who would put the build cache in a ramdisk. I can envision a situation where the cache is put on a network share, to provide rudimentary sharing between people. Just allowing the user to configure the cache directory would make it much easier to set up. The default should be under $CARGO_HOME, but it should be possible to change independently. I know all the stuff I described above should be possible under Linux using just filesystem manipulation, but this variable makes it much more user friendly.

epage commented 2 months ago

This makes several assumption about CI behavior, probably based on how GH Actions behaves. Note that other runners will behave differently, for example by default GitLab does not upload the cache anywhere. I'm not even sure if the cache is compressed. This shouldn't be an issue, but it is a set of bad assumptions in the design doc and could theoretically could lead to bad design.

I would also like to add that CI interactions should have supporting multiple CI runners from the get go. GitHub is, for better or worse, dominant, especially in mindshare, and the Rust project shouldn't be furthering it's monopoly.

This is mentioning a use case. Nothing in the design is "GitHub-specific".

I've seen someone mention clearing old stuff by access time. This would work, in theory, but is also something to be careful around. For example, a lot of desktop systems use relatime, and some people set noatime for performance reasons. Are we even sure that the data is always available?

We are doing our own access time tracking in an sqlite database, see https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#gc

CosmicHorrorDev commented 2 months ago

Currently, Cargo does not even use the whole hash, only 64 bits (sixteen digits) of it. I'm worried that with user-wide caches, collisions may happen

Looks like with 64 bits, the probability of getting a collision only starts becoming a real possibility once you get close to a billion entries.

From: https://en.wikipedia.org/wiki/Birthday_problem#Probability_table
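
For a rough sanity check with the standard birthday approximation, p ≈ 1 − exp(−n² / (2 · 2^64)): about 1.9 × 10^8 entries gives p ≈ 0.1%, about 6.1 × 10^8 gives ≈ 1%, and it takes roughly 5 × 10^9 entries before a collision becomes a coin flip.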