Large scale build farms (like Gecko's) really need distributed caching of build artifacts à la distcc/ccache. This will eventually be a blocker for continued adoption of Rust in Gecko.
We've talked about this several times but don't have even a vague design.
cc @larsberg
I think you want @larsbergstrom?
Yes, thanks larsberg!
For what it's worth: Gecko build slaves are using sccache, which is a tool kind of like ccache, except it works for gcc/clang and MSVC, and uses network storage (S3). It's currently written in Python, but for multiple reasons I'm planning to rewrite it in Rust, which makes me think there's maybe a base to share here.
Would having a machine-local build cache be covered by this, or should I open a new issue?
There are two issues I was curious about here:
1) Can we make a ccache/sccache-like tool work with Rust? Can third parties just do it themselves, or is there support needed in the compiler/cargo?
2) Can we support situations like artifact builds in Firefox (https://groups.google.com/forum/#!topic/mozilla.dev.builds/jGg69m0x6Ck), where the final binaries are retrieved from a server?
From @ncalexan on the second issue:
@larsbergstrom this ticket is pretty broad, and the discussion of caching doesn't really say what you want to do. Artifact builds download the major compiled libraries from Mozilla's automation; the big one is libxul.so. I have only followed the rustc integration at a distance, but it's my understanding that y'all produce .o files that are linked into existing libraries (including libxul.so). If that's true, then any Rust integration into the Gecko tree should "just work" with artifact builds. (Of course, Rust developers won't be able to get the speed advantages of artifact builds, but that'll teach you to use a compiler :))
@rillian is the person I tap to keep abreast of Rust/Gecko integration progress -- perhaps he can add color here?
Right now the Rust code gets linked into libxul, so @ncalexan is correct that artifact builds should be unaffected.
Having cargo be able to query a crate-level build cache would be interesting, and an easier integration point for projects outside gecko.
@eddyb asked about machine-local caches. That's not quite what we're talking about here, and it would be simple enough to teach cargo to link build results into a shared cache under `.cargo` like it does for sources, but a ccache/distcc-oriented interface would work for both.
I had some more thoughts about this today when discussing with @wycats. I think a lot of desires could be solved with something like this: a relatively simple system where Cargo computes a key for each compilation and looks the result up in a cache whose location is configured (e.g. in `.cargo/config`) and may default to the filesystem. The idea here is that the value for a key never changes (kinda like a CAS system). That way if a cache gets concurrent pushes it can ignore all but the first (and maybe assert they're all the same). Cargo could then support custom backends for the get/put functionality, for example to push to S3 or to an internal company caching server.
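Here is a minimal sketch in Rust of the pluggable get/put cache described above, assuming a content-hash key; every name is hypothetical, and an in-memory map stands in for the default filesystem backend:

```rust
use std::collections::HashMap;

// Hypothetical interface for the proposal above: keys are content hashes,
// values are immutable, and a backend may ignore any push for a key it
// already holds (CAS-like, first writer wins).
trait BuildCache {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, artifact: Vec<u8>);
}

// Stand-in for the default filesystem backend; an S3 or company-internal
// backend would implement the same trait.
struct LocalCache {
    entries: HashMap<String, Vec<u8>>,
}

impl BuildCache for LocalCache {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.entries.get(key).cloned()
    }
    fn put(&mut self, key: &str, artifact: Vec<u8>) {
        // Concurrent or repeated pushes for an existing key are ignored
        // (and could be asserted identical, as suggested above).
        self.entries.entry(key.to_string()).or_insert(artifact);
    }
}

fn main() {
    let mut cache = LocalCache { entries: HashMap::new() };
    cache.put("abc123", b"first artifact".to_vec());
    cache.put("abc123", b"conflicting push".to_vec()); // ignored: first wins
    assert_eq!(cache.get("abc123").unwrap(), b"first artifact");
}
```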
There are some caveats as well.
@alexcrichton Thanks! This sounds exciting and that looks good to me at first blush.
@luser @glandium Are there cases where this cache key for the compiled crate might break, based on your experiences with ccache/sccache/artifact builds?
sccache uses the output of the preprocessor + the compiler commandline as input to the hash key. I suspect the equivalent in the Rust world would be something like a hash of some IR + compiler options, and some hash of any external crates being used.
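To illustrate (this is a sketch, not sccache's actual scheme), such a key derivation might look like the following, where the digests are assumed to be computed elsewhere and `DefaultHasher` is a placeholder for a cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Combine the compiler commandline with content digests of the source and
// of each external crate into one cache key.
fn cache_key(rustc_args: &[String], source_digest: u64, extern_crate_digests: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    rustc_args.hash(&mut h);
    source_digest.hash(&mut h);
    extern_crate_digests.hash(&mut h);
    h.finish()
}

fn main() {
    let key = cache_key(
        &["--crate-name".into(), "foo".into(), "-C".into(), "opt-level=2".into()],
        0xdead_beef,
        &[0x1234, 0x5678],
    );
    println!("cache key: {:016x}", key);
}
```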
@alexcrichton, @glandium and I met yesterday to discuss this, and we came up with what seems like a workable path to using sccache for caching crate compilations that are done via cargo. We'd have cargo invoke sccache as a wrapper around rustc, just like we do for C++ compilation in Gecko. (Cargo currently doesn't support rustc wrapper binaries; we probably want to add that as a feature, but for now we can just set `RUSTC=wrapper_script`.) sccache does compiler detection, so we'd teach it to detect rustc, and then generate the hash key from the compilation's inputs.
One big caveat that Alex mentioned is that plugins providing procedural macros can break our assumptions here. We expect that most well-behaved plugins will not be a problem: they should be idempotent on the input, so as long as we hash the input and the plugin binary that should be sufficient. We could just declare that non-idempotent plugins will produce bad results, but we might want to do some special handling there if we intend to enable this more broadly in the future. Alex also mentioned that procedural macros that use data from external files will be an issue, since those files don't currently make their way into the dependency graph. It looks like built-in macros like `include_bytes!` do the right thing here, but we might need to add a way for procedural macros to feed extra files they reference back to the compiler.
This would all fit very well into the existing sccache codebase, and doesn't require any changes to rustc (modulo the note above about making procedural macros work better with external files), and only one small change to cargo (allowing a rustc wrapper), all of which makes it a very appealing approach. Alex believed that given the strong hashing that rustc uses for crate dependencies, which wind up embedded in rlibs, hashing all the inputs on the compiler commandline should provide a strong hash key.
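As a rough illustration of the `RUSTC=wrapper_script` workaround mentioned above (hypothetical, not the actual Gecko setup), the wrapper only needs to forward every compiler invocation through sccache:

```rust
use std::env;
use std::process::{exit, Command};

// Forward all the arguments cargo passes for rustc through sccache, and
// propagate the compiler's exit status back to cargo.
fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    let status = Command::new("sccache")
        .arg("rustc")
        .args(&args)
        .status()
        .expect("failed to spawn sccache");
    exit(status.code().unwrap_or(1));
}
```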
Alex, Mike, if there's anything important that I left out please let me know!
We're extremely interested in this from the Gecko side, as we start to add more Rust code to the Firefox build, and it sounds like this would have broad benefit to all large Rust projects.
That sounds like an excellent summary to me, thanks for the writeup @luser! Things I'd add:
- It'd be nifty if sccache would work like `sccache rustc` when invoked as `rustc` (e.g. hard linked to a different binary name).
- `CARGO_MANIFEST_DIR` - can probably ignore.
- `OUT_DIR` - also on this list, but in theory only affects paths included elsewhere, so as long as we track things like `include!` paths directly we can probably safely ignore it.
Some further thoughts I've had with @brson in other discussions are that a neat thing we could do on the Rust side is to have a server populating a global crate cache by just building a bunch of crates every day. That way a local sccache would pull from the global crate cache when available, or fill in locally otherwise, speeding up everyone's compiles! (just a side thought though)
Unless cargo is invoking rustc with entirely relative paths (which it doesn't appear to be, from looking at `cargo build -v`), making this work will also rely on https://github.com/mozilla/sccache/issues/35.
It'd be nifty if sccache would work like `sccache rustc` when invoked as `rustc` (e.g. hard linked to a different binary name).
We can implement this, but for Firefox builds we'll still want a way to pass both the path to sccache and the path to rustc, since we don't have them in `$PATH`.
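A sketch of the hard-link trick from the bullet above (illustrative only, not sccache's actual code): inspect argv[0] and, when invoked under the name `rustc`, take the same path as an explicit `sccache rustc` invocation:

```rust
use std::env;
use std::path::Path;

fn main() {
    let args: Vec<String> = env::args().collect();
    // Determine the name this binary was invoked under (argv[0]).
    let invoked_as = Path::new(&args[0])
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");
    if invoked_as == "rustc" {
        // Hypothetical dispatch: same code path as `sccache rustc <args...>`.
        println!("would run: sccache rustc {}", args[1..].join(" "));
    } else {
        println!("running as plain sccache");
    }
}
```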
Y'all may want to look into Nix. They've been doing cached, mostly-deterministic building of artifacts and know quite a bit. They also have an IRC channel.
Any native libraries that are being linked to the final output. These are only provided as linker arguments, AIUI, so sccache would need to replicate the linker's search mechanism here.
This unsettling state of affairs, where it seems like things could easily break, has me wishing for a system where the compiler itself provides a hash key to the caching wrapper, since the compiler probably knows best what needs to go into the hash. Similarly, the compiler might subcontract to linkers etc to provide the hashes that are relevant to them.
I'm wondering if @garbas has input on this.
On the RelEng team we started using Nix for our collection of services. One of the features Nix also brings to the table (apart from reproducible builds) is a binary cache that you get regardless of the language-specific package manager (pip, elm-package, npm, cargo, ...).
I'm not sure what the scope of this ticket is, but make your builds deterministic and a binary cache becomes a no-brainer, just a consequence of good design.
I'd love to see support for a local compiled-crate cache in `~/.cargo`, to speed up compiling many different crates with similar dependencies.
However, any kind of compiled-binary cache shared over the network should require an explicit opt-in at build time, not an opt-out. I'd still love to see it, but not as something cargo uses by default.
This unsettling state of affairs, where it seems like things could easily break, has me wishing for a system where the compiler itself provides a hash key to the caching wrapper, since the compiler probably knows best what needs to go into the hash. Similarly, the compiler might subcontract to linkers etc to provide the hashes that are relevant to them.
I agree that having to reverse-engineer the linker's behavior is not the best thing here. Aside from that everything seems very straightforward. It would certainly be nice to have cooperation with the compiler, but it's also nice to not have to make any changes to the compiler for this to work.
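For a sense of what replicating that search means, here is a simplified sketch (real linkers have many more rules, e.g. static vs. dynamic preference and platform-specific suffixes): resolve each `-l name` against the `-L` directories to find the file whose contents would feed the cache key:

```rust
use std::path::PathBuf;

// Resolve `-l name` against the `-L` search directories, roughly the way
// a Unix linker would; the found file's contents would be hashed.
fn find_native_lib(search_dirs: &[PathBuf], name: &str) -> Option<PathBuf> {
    for dir in search_dirs {
        for candidate in [format!("lib{}.so", name), format!("lib{}.a", name)] {
            let path = dir.join(&candidate);
            if path.exists() {
                return Some(path);
            }
        }
    }
    None
}

fn main() {
    let dirs = [PathBuf::from("/usr/lib"), PathBuf::from("/usr/local/lib")];
    match find_native_lib(&dirs, "z") {
        Some(path) => println!("would hash {}", path.display()),
        None => println!("libz not found in search dirs"),
    }
}
```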
I'd love to see support for a local compiled-crate cache in `~/.cargo`, to speed up compiling many different crates with similar dependencies.
sccache supports both a local disk cache and a network cache, so either of these should be doable. I agree that having a global shared cache is maybe not a great default, unless we provide much stronger guarantees, like signing the cache entries or having some other way of verifying them.
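As one way to picture that verification idea (a sketch only; a real deployment would more likely use signatures and a cryptographic digest), a client could reject any cache hit whose contents don't match the digest recorded when the entry was published:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher is a stand-in; production verification would need a
// cryptographic hash or a signature over the artifact.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Accept a cached artifact only if it matches the published digest.
fn verify(artifact: &[u8], expected: u64) -> bool {
    digest(artifact) == expected
}

fn main() {
    let artifact = b"cached rlib bytes";
    let published = digest(artifact);
    assert!(verify(artifact, published));
}
```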
Y'all may want to look into Nix. They've been doing cached, mostly-deterministic building of artifacts and know quite a bit. They also have an IRC channel.
Nix is very neat, but I don't know that there's any real secret sauce there, just "hash all the inputs to the build" and "ensure the build is reproducible so that the same inputs produce the same output". I don't know that there's much we could actually share with them (although I would be happy to be proven wrong). It seems like you can already use cargo within nix and get some benefits from it, but we'd like to build something that's useful even if you haven't opted in to that ecosystem.
@luser, there may not be any secret sauce in Nix, but there is a lot of pre-existing infrastructure that you can get for free by using it. It just seems to give you everything you're going for without much effort. Plus, it allows you to use the same system to manage both Rust and system dependencies, giving you the whole Nix ecosystem for free. One way or another, I feel you're asking people to buy into some system. So it might as well be the one that already exists and gives a ton of tools.
@ElvishJerricco @luser I would really like to get Nix closer to cargo, maybe the same way it was done for stack: via a command line option you have to opt into, `--nix`. This would be a quick way to get something with a binary cache, since Nix would provide one when the `--nix` option is used.
At RelEng we already started using Nix to build docker images (https://github.com/mozilla-releng/services). And if we are going to look into Reproducible Builds for mozilla-central in the future, we will have to make our environments reproducible, and Nix might be a way to get us started down this path. I summarized my thoughts about this after the last Reproducible Builds summit in Berlin.
We have yet to provide a fully (build) reproducible environment for Gecko with Nix, but work has already started here. Some are already using it to develop Gecko, but it is not yet at the stage where it is beginner-friendly for everybody. In the longer run you could bootstrap the Gecko environment using Nix with `./mach bootstrap --nix`, which is similar to the proposal I wrote in the first paragraph.
Woah, I wish I saw this thread earlier!
In https://github.com/haskell/cabal/issues/3882 I propose a method for integrating Nix and Cabal that can be used by any language-specific package manager. [@garbas this is way tighter integration than stack + nix.] It's a really simple plan: the language package manager solves version bounds and comes up with a concrete plan, then sends that off to Nix. It's a lot like the division of labor between CMake and Make.
@luser
Nix is very neat, but I don't know that there's any real secret sauce there, just "hash all the inputs to the build" and "ensure the build is reproducible so that the same inputs produce the same output".
So a big extra feature is what we in Nix-land call import from derivation. This is the only trivially-distributable way I know of to soundly implement dynamic dependencies. See https://blogs.ncl.ac.uk/andreymokhov/cloud-and-dynamic-builds/ and my thread with the author in the comments for some (more theoretical) discussion of this. In practice, this would be most useful for the many things in your "One big caveat that Alex mentioned..." paragraph.
Why not follow the route NixOS will probably take? http://sourcediver.org/blog/2017/01/18/distributing-nixos-with-ipfs-part-1/
Distributing precompiled libraries for a specific platform using IPFS or some BitTorrent DHT method.
EDIT: @Havvy already talked about Nix!
As an update, initial Rust support in sccache landed about a week ago. You can try it out by building sccache master. It's currently a bit of a pain to use: you have to hard-link or copy the sccache binary to be named `rustc`, then pass that as `RUSTC` or put it first in your `$PATH` to get cargo to use it. I have a patch to add support for a `RUSTC_WRAPPER` env var to cargo so you could simply set `RUSTC_WRAPPER=sccache` instead.
The `RUSTC_WRAPPER` patch is in https://github.com/rust-lang/cargo/pull/3887.
This could probably be integrated with rustup - storing crate caches separately for every toolchain. @alexcrichton @brson
An advantage of storing caches per toolchain via rustup would be a trivial implementation of cache clearing on toolchain update/removal, without affecting other toolchains' caches.
For users keeping up with the trains, this would prevent perpetual cache growth.
Is distributed compilation in scope for this feature?
With the advent of sccache and `RUSTC_WRAPPER` I think this is effectively fixed from what we can do on Cargo's end, so closing.
Are there any docs/writeups on how to use `RUSTC_WRAPPER` with sccache?
Run Cargo with the `RUSTC_WRAPPER` environment variable set to the path to the sccache binary, or its name if it is in `$PATH`. For example: `RUSTC_WRAPPER=sccache cargo build`.
With the advent of sccache and `RUSTC_WRAPPER` I think this is effectively fixed from what we can do on Cargo's end, so closing.
I think it'd be great to look into the feasibility of an actual global shared cache, but that has a lot more hard problems than solving the CI / local developer case (like trusting arbitrary binaries from a cache).
So I found this issue, and maybe it's worth opening a new one, but I honestly don't think this replaces distcc at all.
distcc is extremely useful for offloading building from one very-not-powerful computer to an actually-decently-powerful one. For example, I use distcc to defer building from an ARM mini computer to my desktop. While the use case of several worker servers sharing work is covered by this, the offloading of work from one small computer to a larger one is not. So, in that sense, this still hasn't fixed the issue of distcc for cargo IMHO.
So I found this issue, and maybe it's worth opening a new one, but I honestly don't think this replaces distcc at all.
We've just recently finished merging work to add distributed compilation support to sccache. It includes support for distributing Rust compilation which ought to solve your use case. There's a quick start guide available, and more thorough docs should be landing in the near future.
I have only two Rust projects, and yet I'm annoyed that I'm constantly rebuilding the same dependencies (because I have to e.g. `cargo clean` for reasons other than dependencies).
I know that the solution is to use "sccache", but how come this is not the default? Providing fast builds (especially not rebuilding the dependencies constantly) should be the default configuration.
P.S. I don't think sccache works how I assumed, now that I've tested it. If I run `cargo clean`, it still rebuilds `miow` and other dependency packages for the nth time when I `cargo build` with sccache set. And it also bloats the target directory with all the dependencies, which I don't want in the target dir.