Closed enobayram closed 9 months ago
Thank you for the review @emilypi! You have a great point about the documentation. We're planning to write some comprehensive documentation about all of our Nix infra soon and that will include all these workflows. I was planning to add a link to that from the README, so that we don't clutter it too much and more importantly, we can keep it up to date without spamming each one of our Haskell repos with README PRs.
I'm skipping the PR checklist since this is a Nix-only infrastructure change.
PR checklist:
* [ ] Test coverage for the proposed changes
* [ ] PR description contains example output from repl interaction or a snippet from unit test output
* [ ] Documentation has been updated if new natives or FV properties have been added. To generate new documentation, issue `cabal run tests`. If they pass locally, docs are generated.
* [ ] Any changes that could be relevant to users [have been recorded in the changelog](https://github.com/kadena-io/pact/blob/master/CHANGELOG.md)
* [ ] In case of changes to the Pact trace output (`pact -t`), make sure [pact-lsp](https://github.com/kadena-io/pact-lsp) is in sync.

Additionally, please justify why you should or should not do the following:

* [ ] Confirm replay/back compat
* [ ] Benchmark regressions
* [ ] (For Kadena engineers) Run integration-tests against a Chainweb built with this version of Pact

This PR does the following:
## Unify the GHC derivation used across Kadena projects
Instead of depending on `nixpkgs` and `haskellNix` directly, this flake now depends on our new `hs-nix-infra` flake and uses the `nixpkgs` and `haskellNix` revisions provided by it. The hash of the `nixpkgs` and `haskellNix` flakes used for defining the `haskell.nix` project determines the hash of the GHC package that gets used to compile the Haskell modules.

When multiple projects depend on `nixpkgs` and `haskellNix` independently, it's very hard (and not well supported by the Nix CLI) to make sure that their `nixpkgs` and `haskellNix` pins don't deviate from each other arbitrarily. For example, updating two projects' `flake.lock` files at slightly different times is likely to leave the pins on different revisions, even though the difference doesn't matter functionally. These unnecessarily different GHC packages put a lot of pressure on our CI infrastructure, taking hours to build functionally equivalent GHC packages and bloating the cache (with binaries from all the architectures we build and cache for). They also bloat the `/nix/store` of any `pact` user who wants to `nix build` an uncached `pact` version.
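As a rough sketch (the attribute names and layout here are illustrative assumptions, not the actual contents of this PR), the consuming flake now pins only `hs-nix-infra` directly and reuses its `nixpkgs` and `haskellNix` revisions via `follows`:

```nix
{
  # Sketch of a consuming flake.nix: only hs-nix-infra is pinned here,
  # so every project that follows its nixpkgs/haskellNix inputs ends up
  # with the same GHC derivation (and the same store hash).
  inputs = {
    hs-nix-infra.url = "github:kadena-io/hs-nix-infra";
    nixpkgs.follows = "hs-nix-infra/nixpkgs";
    haskellNix.follows = "hs-nix-infra/haskellNix";
  };

  outputs = { self, hs-nix-infra, nixpkgs, haskellNix, ... }: {
    # The haskell.nix project would then be defined against these shared
    # revisions rather than project-local pins.
  };
}
```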
## The new workflow for updating the Haskell-Nix toolchain

After this PR, the new workflow for managing our Haskell dependencies used by Nix will involve the following steps:

1. Bump the `hs-nix-infra` dependency of this flake to the latest version.
2. If that's not enough, we need to open a PR to `hs-nix-infra` to bump its relevant input:
   * Running `nix flake lock --update-input hackage` there gets a newer Hackage snapshot without needing to introduce a new GHC derivation.
   * If we need a newer `nixpkgs` version, or if we need to bump our `haskellNix` pin for any reason, we need to bump `nixpkgs` and `haskellNix`. The PR would preferably bump both of them to the latest version.

Hopefully, this new workflow will reduce the number of `nixpkgs` + `haskellNix` versions we depend on across our Haskell projects.
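In terms of concrete commands, the routine part of this update workflow might look like the following (a sketch, assuming the input is registered as `hs-nix-infra` in this flake's `flake.lock`; only the `hackage` invocation appears verbatim in this PR):

```shell
# Routine update: bump only this flake's hs-nix-infra pin.
nix flake lock --update-input hs-nix-infra

# Inside hs-nix-infra itself, a newer Hackage snapshot can be pulled in
# without introducing a new GHC derivation:
nix flake lock --update-input hackage
```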
## Add a `recursive` alternative to the `default` package

As part of the CI automation for this repo, we're building and caching the Nix binaries for `pact`, which makes it convenient for any user to `nix build` pact from any commit/branch, since all the dependencies will come from our binary cache. However, even without building anything locally, evaluating the `default` package of this flake takes a significant amount of time and involves downloading ~2 GB of Nix dependencies. This is due to the complexity of what `haskellNix` does for us at Nix evaluation time.

This PR introduces a `recursive` package to this flake's outputs, which uses `recursive-nix` to push the Nix evaluation of the `default` package into the build of a derivation. This means that any user who runs `nix build .#recursive` will fetch the `pact` binary from our cache without having to perform any complex Nix evaluation locally, or download the Nix dependencies of any such evaluation, as long as the `recursive` derivation they're building is already in our binary cache. If it isn't, the `recursive-nix` derivation will be built locally (in which case, make sure your local Nix setup has `recursive-nix` enabled), which is essentially as much work as building `default` itself. This might still be worthwhile, however, since subsequent builds of the same `recursive` derivation will complete immediately, without having to evaluate the `default` derivation again.
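For reference, the two entry points compare roughly like this (a sketch; it assumes a flakes-enabled Nix installation, our binary cache configured as a substituter, and `recursive-nix` enabled for the fallback local build):

```shell
# Full local evaluation: haskellNix's evaluation-time dependencies
# (~2 GB) are downloaded before pact is substituted from the cache.
nix build .#default

# Evaluation pushed into a derivation build: if that derivation is
# already in our binary cache, pact is fetched without any heavy
# local Nix evaluation.
nix build .#recursive
```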