NixOS / nixpkgs

Nix Packages collection & NixOS
MIT License

Lockfiles bloat the Nixpkgs tarball #327064

Open Atemu opened 1 month ago

Atemu commented 1 month ago

Introduction

The size of the Nixpkgs tarball places a burden on the internet connections and storage systems of every user, so we should strive to keep it small. Over the years that I've been contributing, it has more than doubled in size.

In https://github.com/NixOS/nixpkgs/issues/327063 I discovered the quite negative effect of Cargo.lock files in the Nixpkgs tree, with just 300 packages bloating the compressed Nixpkgs tarball by ~6 MiB.

Here I'd like to document the status quo of sizes of lockfiles found in Nixpkgs and other automatically generated files of significant size.

Methodology

Numbers for the lockfiles and patches are given as (total bytes) or (total bytes / number of files = average per file).

Notable non-generated files

For comparison and out of interest I also recorded the compressed sizes of notable files that were made by hand:

Analysis

Lockfiles contribute greatly to the compressed Nixpkgs tarball size. In total, 8793206 bytes (~8.4 MiB) of the ~41 MiB can be attributed to lockfiles used in individual packages (~20%). The biggest offenders by far are Rust packages' Cargo.lock files, which are analysed in more detail in https://github.com/NixOS/nixpkgs/issues/327063.

The worst offenders in terms of bytes per package are packages which lock their Yarn dependencies, at ~130 KiB/package. These are fortunately rare but still add up to ~600 KiB. The next worst appears to be bazel_7, which single-handedly requires ~100 KiB of compressed data.
Other notably bloated packages are those with a package-lock.json, at ~50 KiB/package, and Electron's two info.json files, which combine to ~50 KiB.

Patches also place a significant burden on compressed tarball size. Individually they're usually quite small, but they're very common and add up to 2.6 MiB.

All automatically generated files discovered here (package lockfiles + set lockfiles) sum to 19558712 bytes (~18.6 MiB compressed), which is about half the size of the Nixpkgs tarball.

Discussion

Solutions

There are a few measures that could be taken to reduce file size of generated files:

Summarise hashes (e.g. vendorHash)

Rather than hashing a bunch of objects individually, hash a reproducible record of all objects. This is already the status quo for e.g. buildGoModule.
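As a sketch of what this looks like (all names, URLs, and hashes below are placeholders): with buildGoModule, a single fixed-output hash covers the entire vendored dependency tree, so no per-dependency hashes need to live in the Nixpkgs tree.

```nix
# Hedged sketch; pname, owner, and repo are hypothetical. One fixed-output
# vendorHash summarises all Go module dependencies, instead of recording a
# hash per dependency as a vendored lockfile would.
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "example-tool";
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    rev = "v${version}";
    hash = lib.fakeHash; # placeholder
  };

  # Single hash over the reproducible vendor directory:
  vendorHash = lib.fakeHash; # placeholder
}
```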

Record less info

Some info is not strictly necessary for the lockfiles to function. For each Emacs Lisp package, for instance, at least two commit IDs and two hashes are recorded. The commit IDs could probably be dropped entirely here, which would reduce the compressed file size by about a third.

Fetch files rather than vendoring them

Oftentimes, files required for some derivation are available from an online source. Fetching the file rather than vendoring it into the Nixpkgs tree reduces the space required to a few dozen bytes (~32 bytes for the hash and a similar amount for the URL).
This is especially relevant for patches, as those are frequently available elsewhere. Use pkgs.fetchpatch2 in such cases.
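For instance, a patch that exists upstream can be fetched instead of vendored; only the URL and hash then live in the tree (URL and hash below are placeholders):

```nix
# Hedged sketch: fetchpatch2 downloads and normalises the patch at build
# time. Only these few dozen bytes end up in the Nixpkgs tarball.
patches = [
  (fetchpatch2 {
    url = "https://github.com/example/project/commit/abcdef1234567890.patch";
    hash = lib.fakeHash; # placeholder; fill in the real hash
  })
];
```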

Lock an entire package set

Lockfiles usually represent a set of desired transitive dependency versions that some language-specific external SAT solver spat out. These are frequently duplicated because many separate packages use the same libraries but are often not exact duplicates due to differences in upstream-defined dependency constraints.

Instead, it is possible to record one large snapshot of the latest desirable versions of all packages in existence in some ecosystem and have dependent packages use the "one true version" instead of their externally locked versions.

It also provides efficiency gains, as dependencies are only built once, and brings us closer to the traditional purpose of a software distribution: integrating one coherent set of packages.

This approach is used quite successfully by e.g. haskellPackages, measuring at just 133 bytes per package.
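Schematically (file and attribute names hypothetical), a dependent package then just references the one pinned version from the shared set instead of carrying its own lockfile:

```nix
# Hedged sketch: the concrete dependency versions and hashes come from the
# centrally generated haskellPackages snapshot, not a per-package lockfile.
{ haskellPackages }:

haskellPackages.callPackage ./my-app.nix { }
# my-app.nix declares dependencies such as `aeson` by name only; the "one
# true version" is defined once in the shared, auto-generated package set.
```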

This is not feasible for all ecosystems, however, as just the names of all 3330720 npm packages (no hashes) are ~20 MiB compressed, and the hashes would be at least another 100 MiB. Perhaps a subset approach could be used, though: only accepting packages into the auto-generated set that are depended upon at least once in Nixpkgs.

Future work

Amendments

Another solution: External lockfile repo

This is another solution I came up with after publishing and being exposed to some of the reasons why lockfiles are vendored. It often happens because upstream provides no lockfile itself, but one is necessary for the software to build reproducibly, which in our case often means to build at all.

A lockfile must:

Vendoring lockfiles into the Nixpkgs tree achieves all of these, but it's not the only way to do so.

For such cases, it would alternatively be possible to store these third-party-generated lockfiles in a separate repository and merely fetch them from Nixpkgs. You'd fetch them individually, not as a whole, so the issue of size only affects build-time closures, which would have been affected either way. (The current issue with lockfiles is that they bloat Nixpkgs regardless of whether they are useful to the user or not.)

This solution would work in cases where lockfiles are only required as derivation inputs (not eval inputs), which I believe covers most usages of vendored lockfiles in Nixpkgs.
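A minimal sketch of the derivation-input case (the lockfile repository URL and all names are entirely hypothetical): the lockfile is fetched as an ordinary fixed-output derivation and copied into the source tree before the build, so it never has to be read at eval time.

```nix
# Hedged sketch; the external lockfile repository does not exist (yet).
{ lib, stdenv, fetchurl }:

let
  # Fetched at build time; only a URL and a hash are stored in Nixpkgs.
  externalLockfile = fetchurl {
    url = "https://lockfiles.example.org/example/1.2.3/Cargo.lock";
    hash = lib.fakeHash; # placeholder
  };
in
stdenv.mkDerivation {
  pname = "example";
  version = "1.2.3";
  src = ./.; # placeholder source

  # The lockfile is a plain derivation input, so no IFD is involved.
  postPatch = ''
    cp ${externalLockfile} Cargo.lock
  '';
}
```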

This could even become a cross-distro effort as we surely are not the only distro which requires pre-made lockfiles in its packaging.

nixos-discourse commented 1 month ago

This issue has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/cargo-lock-considered-harmful/49047/2

Atemu commented 1 month ago

I have amended the OP with another possible solution.

Frontear commented 1 month ago

I think for externalizing lockfiles it'd be a good idea to actually determine which other distros do similar things to Nixpkgs (vendor their own lockfiles for reproducibility) before committing to such an idea.

The main reason I say this is that, while I love the idea, I think it could easily become a management nightmare when things are externalized this way, and it would really only be worth the effort if it's actually maintained by a larger team outside of Nix, such as any of the aforementioned distros.

chayleaf commented 1 month ago

Lockfiles are sadly a necessity whenever dependencies aren't pinned (and even then parsing lockfiles can be better than a FOD alternative).

IPFS for the external lockfile repo seems like it'd be a good fit? Just pin the files after merging the PRs. Of course, hosting them normally is an option as well, but all potential nixpkgs contributors will need upload access for WIP PRs.

The problem with external lockfile repo is that we'd have to completely ditch lockfile parsing (as it would require IFD) and switch to FODs, which may force us to rewrite some Nix code (currently, Gradle support does that, so it would be affected) and maintain more hashes. It still seems like the better option out of the two though.

Atemu commented 1 month ago

IPFS for the external lockfile repo seems like it'd be a good fit?

Interesting thought but the problem with IPFS remains that we need someone to pin the files or they will inevitably be lost.

Of course, hosting them normally is an option as well, but all potential nixpkgs contributors will need upload access for WIP PRs.

Anyone can create a PR. Ideally though, we wouldn't even let users upload lockfiles and rather have them be generated by some trusted infrastructure with users merely providing upstream versions they need to have a lockfile for. Remember, lockfiles are security-critical.
As for who should have access: while the process has been lost in the current turmoil, we can simply use the same set of "trusted users" that we have for Nixpkgs merge access, and that'd be fine once we have recovered as a community. We'd only need to do basic QA: code correctness, whether something actually needs to be added, and spam prevention. The code users would provide in that repo should be very basic and simple.

The problem with external lockfile repo is that we'd have to completely ditch lockfile parsing

Given the performance issues of Cargo.lock parsing, my first impression of that would be that it's a good thing.

and switch to FODs, which may force us to rewrite some Nix code (currently, Gradle support does that, so it would be affected) and maintain more hashes.

Note that the need for this to happen exists on the time scale of months to years, not days to weeks.

Also, not all lockfiles must necessarily go, but there must be some sort of limit on how much of our "data budget" we use on them.

ehmry commented 1 month ago

I migrated Nim to lockfiles and it has fixed a lot of problems, but the lockfiles are only getting bigger. I'm in favor of deduplicating the contents of the lockfiles in a centralized place, but I think it would take special tooling that would be somewhat consistent across languages.

If we can make it clear that lockfiles and "supply-chain" security are one and the same, then maybe we can get funding for a solution, but now I see that the NGI budget is getting cut.

Atemu commented 1 month ago

Perhaps you could get your idea funded if you slapped "AI" onto it. (/s)

nixos-discourse commented 1 month ago

This issue has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/every-new-release-of-the-nixos-unstable-channel-leads-to-a-download-of-around-42mb/49747/2

MagicRB commented 4 weeks ago

Just throwing an idea out here: what if we allowed an "import from builtin" that would allow us to store lockfiles in a different repo, fetch them lazily, and still use them at eval time? It would still slow down eval, but not nearly as much as arbitrary IFD.

adisbladis commented 3 weeks ago

Summarise hashes (i.e. vendorHash)

I'd like to point out that this is a space-time trade-off. Large FODs are more efficient from the Nixpkgs point of view, but they bloat binary caches.

This is also a negative for security. We have no insight into what a single hash represents in terms of dependency graph.

Atemu commented 3 weeks ago

I'd like to point out that this is a space-time trade-off. Large FODs are more efficient from the Nixpkgs point of view, but they bloat binary caches.

That's a good point. I'd say that makes it a space-space trade-off though: Space in the tarball vs. space in the binary cache.

I consider space in the tarball to be a lot more precious as it affects each and every user because of the tarball's status as the source of all truth. The tarball size is also only one order of magnitude greater than the size of all lockfiles, making lockfiles a significant contributor to bloat.
Meanwhile, the binary cache size only affects one entity, will always be gigantic, and is 5-6 orders of magnitude greater than all Rust packages' vendor tarballs combined. Additionally, it could conceivably be deduplicated in the future, in which case I'd expect the size of all vendor tarballs to deduplicate down to what they would require if represented by small FODs.

This is also a negative for security. We have no insight into what a single hash represents in terms of dependency graph.

You don't have such insights at eval time but, while convenient, that's not a necessity. You could just take a look at the dependency declaration file as well as the vendor tarball to figure out the "full" dependency graph.
Given that there will be at least some usages of "big FODs", tooling would have to be able to deal with that anyhow.

adisbladis commented 3 weeks ago

Meanwhile the binary cache size only affects one entity

This is not true. Binary cache size growth is a problem that costs some users dearly. There are plenty of places in the world (I've lived in some) where unmetered internet connections are impossible to get and bandwidth is expensive.

You don't have such insights at eval time but, while convenient, that's not a necessity.

It is a necessity to statically reason about the dependency graph. Sure, you can write tooling that inspects derivation outputs, but that's another level of tooling complexity, and it makes it very expensive to scan a package tree.

Additionally I've never seen a convincing overrides story for any FOD packager.

I feel like we are sacrificing way too much about what makes Nix good with these hacks.

Atemu commented 3 weeks ago

This is not true. Binary cache size growth is a problem that costs some users dearly.

Sure, but as I mentioned previously, "big" vendor FODs simply aren't a great contributor here. It's not uncommon for output paths to be a few orders of magnitude larger than "big" vendor FODs, and those change on every rebuild (×4 for all our platforms) while FODs only change on updates and are usually the same on any platform.

As also mentioned, optimisations for the binary cache that IMV are unavoidable going forward such as deduplication will reduce the difference between "big" FODs and lots of tiny FODs to almost nothing.

It's not a significant contributor to unsustainable growth currently and will likely even be less significant going forward; at the worst slightly less efficient than the alternative. I don't see a significant point to be had w.r.t. binary cache size.

There are plenty of places in the world (I've lived in some) where unmetered internet connections are impossible to get and bandwidth is expensive.

The "cost" of big FODs only hits you when you're building stuff yourself, and in that case you'd have to compare the 15-30 MiB to the rest of the inputDerivation, which for a typical Rust package such as fd is >1.6 GiB. Using a "big" FOD or not would be as significant as a rounding error here.

It is a necessity to statically reason about the dependency graph.

We all use Nix for this reason; I feel you. I'd much prefer it if we had one reasonably manageable package set à la haskellPackages instead of the separate subset package set per drv which the current lockfiles represent.

That'd allow for static reasoning as well as sustainable tarball size & eval time growth, but that's not the reality we live in: we have to choose one.

Given that the use-cases for reasoning about the entire source dependency graph (remember: this is source code, not build artifacts) are rather fringe and could be served less elegantly through other methods, I see the trade-off as being in favour of abstaining from lockfiles.

Additionally I've never seen a convincing overrides story for any FOD packager.

At a theoretical level, I don't see how it'd be any different from a lockfile packager. You'd pass a new/updated/different lockfile in either case, but you'd additionally have to update the vendor hash with a FOD packager, which is a slight overhead and a little inefficiency, though not unreasonably so.
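A sketch of what such an override might look like in practice (package name, repo, and hashes are all hypothetical, and this assumes a buildGoModule-style package whose vendorHash is honoured by overrideAttrs): bump the source, then re-determine the vendor FOD hash, e.g. by building once with a fake hash and copying the real one from the resulting mismatch error.

```nix
# Hedged sketch of overriding a FOD-based (buildGoModule-style) package.
mytool-patched = pkgs.mytool.overrideAttrs (old: {
  version = "1.2.4";
  src = pkgs.fetchFromGitHub {
    owner = "example";
    repo = "mytool";
    rev = "v1.2.4";
    hash = pkgs.lib.fakeHash; # placeholder
  };
  # The vendor FOD no longer matches the new source; build once with a
  # fake hash and replace it with the hash from the error message.
  vendorHash = pkgs.lib.fakeHash;
});
```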

I feel like we are sacrificing way too much about what makes Nix good with these hacks.

I feel that both hacks sacrifice what makes Nix and Nixpkgs good; neither is ideal.

The best solution is and always will be to do our job as a distro and define one package set for all dependent packages to use, making any lockfile irrelevant. Of course, that's really hard work.

emilazy commented 2 weeks ago

Linking https://github.com/NixOS/nixpkgs/issues/333702 here, which is Rust-specific but which I hope can point to a better approach for language ecosystems in general.