luser opened this issue 3 years ago
I believe this would also be useful for people using sccache in distributed compilation mode, as they could hit an exaggerated version of this problem, similar to what's described in that message about Chromium: more build capacity for compiling than for linking.
I have no idea if this would be practical, but could cargo automatically monitor memory usage to adjust how many concurrent threads to use?
I've managed to work around this by enabling swap. Linking time did not suffer visibly. On Ubuntu, I followed this guide.
Adding a --link-jobs option to specify the number of jobs for linking. The option would default to the number of parallel jobs.
Here is what the help output would look like (the -j option is shown for comparison):
-j, --jobs <N> Number of parallel jobs, defaults to # of CPUs
--link-jobs <N> Number of parallel jobs for linking, defaults to # of parallel jobs
@rustbot claim
@weihanglo see also #7480
FWIW, the Cabal community had a discussion a while back: https://github.com/haskell/cabal/issues/1529
Potentially the unstable rustc flag -Zno-link can separate the linking phase from the others (see https://github.com/rust-lang/cargo/issues/9019), and then Cargo can control the parallelism of linker invocations. Somebody needs to take a look at the status of -Zno-link/-Zlink-only in rustc (and that is very likely me).
As this is focusing on the problem of OOM, I'm going to close in favor of #12912 so we keep the conversations in one place.
FWIW, setting split-debuginfo = "packed" or "unpacked" in the profile should reduce the linker's memory usage. In my experiment it roughly halved the memory usage per invocation.
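For reference, a minimal sketch of that profile change (the profile name and the exact value here are placeholders; which values are supported depends on platform and toolchain):

```toml
# Cargo.toml -- sketch: keep debuginfo out of the final link step.
# "unpacked" leaves debuginfo next to the object files; "packed" collects it
# into a separate file after the link.
[profile.release]
split-debuginfo = "unpacked"
```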
Something we might want to keep an eye on in rustc: https://github.com/rust-lang/rust/issues/48762
> As this is focusing on the problem of OOM, I'm going to close in favor of #12912 so we keep the conversations in one place.
I suppose, although this is a very specific problem and I'm doubtful that the generic mechanisms being discussed in that issue will really help address it.
Thanks. Reopened, as it might need both #9019 and #12912, and maybe other upstream work in rustc, to make it happen.
FWIW, there is a --no-keep-memory flag for the GNU linker. I haven't tried it, but it might help before we make some progress on this.
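If someone wants to experiment, here is an untested sketch of how that flag could be forwarded through cargo (assumes a Linux target where rustc drives the link through the cc driver and GNU ld, hence the -Wl, prefix):

```toml
# .cargo/config.toml -- untested sketch; only meaningful when GNU ld does the link.
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-Wl,--no-keep-memory"]
```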
https://github.com/rust-lang/rust/pull/117962 has made it into nightly. It could alleviate the pain of linker OOM to some extent.
> FWIW, there is a --no-keep-memory flag for the GNU linker. I haven't tried it, but it might help before we make some progress on this.
I suspect this will make performance much worse in the average case, unfortunately.
I think this can be closed, as it is the linker's business not to run out of memory. mold does this, for example; see https://github.com/rui314/mold/issues/1319 for its MOLD_JOBS environment variable. Otherwise cargo ends up doing everything and nothing well...
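For anyone who wants to try that approach, a rough sketch of wiring mold up through cargo (assumes clang and mold are installed and an x86_64 Linux target; per the linked issue, MOLD_JOBS limits how many mold processes run at once):

```toml
# .cargo/config.toml -- sketch for letting mold limit its own concurrency.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]

[env]
# Inherited by the linker processes rustc spawns; "1" should allow only one
# mold link at a time.
MOLD_JOBS = "1"
```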
For something like that, jobserver support in a linker would be a big help, so we could coordinate between rustc and the linker on how many threads are available to use.
That also only addresses the number of parallel threads, not actual memory consumption. As with threads, any solution will likely need coordination between the linker and rustc/cargo.
In CI at my work, we ran into a situation where rustc would get OOM-killed while linking example binaries:
We were able to mitigate this by using a builder with more available memory, but it's unfortunate. We could dial down the parallelism of the whole build by explicitly passing -jN, but that would make the non-linking parts of the build slower by leaving CPU cores idle.

It would be ideal if we could explicitly ask cargo to lower the number of parallel linker invocations it will spawn. Compile steps are generally CPU-intensive, but linking is usually much more memory-intensive. In the extreme case, for large projects like Firefox and Chromium where the vast majority of code gets linked into a single binary, that link step far outweighs any other part of the build in terms of memory usage.
In terms of prior art, ninja has a concept of "pools" that allows expressing this sort of restriction in a more generic way.
The Ninja feature was originally motivated by Chromium builds switching to Ninja and wanting to support distributed builds: there might be capacity to spawn many more compile jobs in parallel, since they can run on distributed build nodes, but link jobs, which need to run on the local machine, want a lower limit.
If this were implemented, one could imagine a further step whereby cargo could estimate how heavy individual linker invocations are by the number of crates they link, and attempt to set a reasonable default value based on that and the amount of available system memory.