DeterminateSystems / magic-nix-cache-action

Save 30-50%+ of CI time without any effort or cost. Use Magic Nix Cache, a totally free and zero-configuration binary cache for Nix on GitHub Actions.
MIT License

Cache Action silently errors #86

Closed dustyhorizon closed 1 month ago

dustyhorizon commented 1 month ago

There seems to be an error upstream that is causing the cache to silently fail to start.

I have the following step in my CI workflow, which worked quite well until recently:

```yaml
...
- name: Setup Nix Cache
  uses: DeterminateSystems/magic-nix-cache-action@v8
  with:
    listen: 0.0.0.0:37515
    use-flakehub: false
    diagnostic-endpoint: ""
...
```

The following is the resulting output when this step runs:

```
Run DeterminateSystems/magic-nix-cache-action@v8
  with:
    listen: 0.0.0.0:37515
    use-flakehub: false
    use-gha-cache: true
    upstream-cache: https://cache.nixos.org
    flakehub-cache-server: https://cache.flakehub.com
    flakehub-api-server: https://api.flakehub.com
    startup-notification-port: 41239
    diff-store: false
    _internal-strict-mode: false
  env:
    DETERMINATE_NIX_KVM: 1

Downloading magic-nix-cache for X64-Linux
  Fetching from https://fiids.install.determinate.systems/magic-nix-cache-closure/stable/X64-Linux
  Cache Size: ~23 MB (24214094 B)
  /usr/bin/tar -xf /home/runner/work/_temp/ca79ce8b-9615-49cf-b7f2-ce7000966a57/cache.tzst -P -C /home/runner/work/_temp/magic-nix-cache-d1683dda-dead-4a71-9f90-d0021f0ce180 --use-compress-program unzstd
  Cache restored successfully
Received 24214094 of 24214094 (100.0%), 23.1 MBs/sec
FlakeHub Cache is disabled due to missing or invalid token
If you're signed up for FlakeHub Cache, make sure that your Actions config has a `permissions` block with `id-token` set to "write" and `contents` set to "read"
Waiting for magic-nix-cache to start...
Error: --flakehub-api-server-netrc is required when determinate-nixd is unavailable
```
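For reference, the `permissions` block that the FlakeHub warning above describes would look like the following at the top level of a workflow (a sketch based only on the log message; it applies to workflows that actually use FlakeHub Cache, which this one does not):

```yaml
permissions:
  id-token: write   # lets the action request an OIDC token for FlakeHub Cache
  contents: read
```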

I am not sure what changed recently, but after the error the rest of my workflow proceeded as normal. However, since I rely on the cache to function as an HTTP cache, my subsequent (parallel) steps started to fail because the CI runner has limited resources.

My last known run with a working cache was 17 September 2024 04:13 GMT+0, so I suspect an erroneous commit landed between then and now.

dustyhorizon commented 1 month ago

@colemickens thanks for pointing that out; in the interim I have pinned source-revision to the most recent release prior to the commit listed in #87.
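The interim pin described above might look like this (a sketch: the placeholder SHA is hypothetical and would need to be replaced with the actual commit of the last known-good release):

```yaml
- name: Setup Nix Cache
  uses: DeterminateSystems/magic-nix-cache-action@v8
  with:
    # hypothetical placeholder; use the real commit SHA of the
    # last release before the one referenced in #87
    source-revision: <known-good-commit-sha>
    listen: 0.0.0.0:37515
    use-flakehub: false
    diagnostic-endpoint: ""
```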

colemickens commented 1 month ago

This should be resolved! You can unpin now @dustyhorizon. Thanks for the report!