fenollp opened 3 years ago
In addition, dependencies should be built with an empty Docker context. The Dockerfile generated on the fly should first look for hashed dependencies available over the network (i.e. does the image `rust/cache:HASHED` exist?), falling back to the local cache. A build `ARG` can be used to switch between networked caches.
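Something like the following could be generated; a minimal sketch, where the registry name, the hash, and the `CACHE_REGISTRY` arg are all hypothetical, not something cargo-wharf emits today:

```dockerfile
# Illustrative only. Switch between networked caches at build time:
#   docker build --build-arg CACHE_REGISTRY=eu.mirror.example/rust/cache .
ARG CACHE_REGISTRY=cache.example.com/rust/cache

# One stage per dependency, tagged by the hash of its inputs. When the tag
# exists (locally or in the registry) it is pulled and reused; when it does
# not, the generator would emit a build-from-source stage here instead.
FROM ${CACHE_REGISTRY}:3f9a1c2e AS dep-serde
```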
A thing I haven't mentioned yet: the hash of a dependency should be a cryptographic hash of the inputs of that dependency, à la Nix.
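A back-of-the-envelope version of such a hash, assuming the inputs are the crate name, version, resolved features, and the toolchain (the real input set would need to be pinned down much more carefully):

```sh
# Sketch: derive a cache tag from a dependency's build inputs.
# The input list is illustrative and incomplete (profile, target triple,
# cfg flags, transitive dependency hashes, etc. would matter too).
crate=serde
version=1.0.130
features=derive
toolchain="$(rustc --version)"

printf '%s\n' "$crate" "$version" "$features" "$toolchain" \
  | sha256sum | cut -c1-16
```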
FYI, I did some initial work on Dockerfile cross-compile support via `--platform` in tonistiigi/xx#27. There are no custom wrapper tools or prebuilt images with toolchains, only `rustup` or `apk add rust`. It seems to work quite well, but people more familiar with Rust might want to double-check. To me, it is cleaner than cross or xargo. I might look into integrating something similar into this project in the future as a Rust learning exercise, but in case I don't find time for that, I'm putting it out here for visibility.
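A simplified sketch of that `--platform` approach (not tonistiigi/xx itself; the base image and triple mapping are illustrative): the compiler always runs natively on the build host and cross-compiles for the requested platform.

```dockerfile
# Build for another architecture with, e.g.:
#   docker build --platform=linux/arm64 .
FROM --platform=$BUILDPLATFORM rust:1 AS build
ARG TARGETPLATFORM
# Map the Docker platform to a Rust target triple (two cases shown;
# linker setup for foreign targets is elided).
RUN case "$TARGETPLATFORM" in \
      linux/amd64) echo x86_64-unknown-linux-gnu  > /triple ;; \
      linux/arm64) echo aarch64-unknown-linux-gnu > /triple ;; \
    esac \
 && rustup target add "$(cat /triple)"
WORKDIR /src
COPY . .
RUN cargo build --release --target "$(cat /triple)"
```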
Ah, thanks for the heads-up on your `xx` project.
My whole point here, however, is to end up with a global build cache for Rust crates by expressing the topological tree of crate dependencies as Dockerfile stages & `mount=bind`. Setting `DOCKER_HOST` turns said cache into a networked one.
I am not finding any emphasis on build-result reuse in `xx`, but it is probably implicit. How does `xx` handle, or plan to handle, caching of intermediate build results?
Thanks
@fenollp `xx` is not meant to be a replacement for this project. It is a helper for adding native cross-compile support to Dockerfiles so that they work with any `--platform` configuration. This project could likely borrow some ideas from it if it wants similar capabilities. Rust builds in Dockerfiles can't do package-based caching; if you just want faster incremental builds, then cache mounts help.
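For reference, the cache-mount pattern mentioned above looks roughly like this (paths and base image are just examples):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1 AS build
WORKDIR /src
COPY . .
# Persist cargo's registry and the target dir across builds. This speeds
# up incremental rebuilds, but the cache is host-local and monolithic:
# it is not a shared, per-crate cache.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/src/target \
    cargo build --release
```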
I'd like to discuss the future of `cargo-wharf` and, to that end, some ideas I'd like to collaborate on.

## Cache integration and the case for a community-backed global cache
Recent versions of `docker build` support `--output=PATH`, which copies files out of an image. This allows writing the compilation results of each dependency to the filesystem of the local machine or of a CI cache. `cargo` has a way of specifying where to look for build artifacts other than the sometimes-empty `./target/` dir: `CARGO_TARGET_DIR`.
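Concretely (the stage name and paths here are placeholders), the two could be combined along these lines:

```sh
# Export one dependency's compiled artifacts out of the image onto the host...
docker build --target dep-serde --output=./prebuilt .

# ...then point cargo at a target dir that already contains those artifacts.
CARGO_TARGET_DIR=./prebuilt cargo build --release
```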
**More on `CARGO_TARGET_DIR`**: per https://stackoverflow.com/a/37472558/1418165 it turns out that a shared `CARGO_TARGET_DIR` (or `CARGO_BUILD_TARGET_DIR`) overrides the default `"target"` directory, per https://github.com/rust-lang/cargo/pull/1657#issue-36459101.

These would have to be part of the hashed name of each dependency being built (the dependency path or the Docker tag). To solve hermeticity issues, see `cross`.

## cross
`cross` already does a good job of building Rust projects (on various platform triples) using Docker (`docker run`) and QEMU. This work should be adapted, in whatever way is most maintainable, to use BuildKit: its QEMU integration, its rootless capabilities, and its ability to run the compute graph with maximum parallelization.
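For context, `cross` today is a drop-in replacement for the `cargo` CLI:

```sh
# cross wraps cargo: the build runs inside a Docker container that ships
# the right cross toolchain, and QEMU executes foreign-arch test binaries.
cargo install cross
cross build --release --target aarch64-unknown-linux-gnu
cross test --target aarch64-unknown-linux-gnu
```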
## Conclusion

So if `cargo-wharf` were to create hermetic BuildKit targets for each dependency, leveraging the work on `cross`, I think there'd be a seamless way to integrate both local and global caches for dependencies. This global cache (basically a Docker registry) could then be paid for by the community and benefit the community.

To get there I see these development steps:
- Dockerfile stages generated per dependency by `cargo-wharf`, hashed and hermetic like `cross`'s
- `--mount=from=HASHEDDEP,source=...,target=...` mounts, as read-only
- `docker build` this Dockerfile as the `cargo build` equivalent, and the same for `cargo test` (see the sketch below)
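A sketch of what such generated stages could look like; the stage names, the hash, and the paths are all made up for illustration:

```dockerfile
# syntax=docker/dockerfile:1
# Hypothetical cargo-wharf output: one hermetic stage per dependency,
# named after the cryptographic hash of its inputs.
FROM rust:1 AS dep-3f9a1c2e
# ...compile the crate in isolation; artifacts land in /out...

FROM rust:1 AS app
WORKDIR /src
COPY . .
# Bind-mount each prebuilt dependency; such mounts are read-only by
# default, so a build step cannot corrupt the shared cache.
RUN --mount=from=dep-3f9a1c2e,source=/out,target=/deps/3f9a1c2e \
    cargo build --release
```

Running `docker build` against this file would then play the role of `cargo build`.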
Note that with that global Docker registry and `DOCKER_HOST=ssh://lotsa.oompf.machine.com cargo build`, only the final build results would then be transferred over the network.

Ideas, thoughts, notes, criticism: please shoot.