pantsbuild / pants

The Pants Build System
https://www.pantsbuild.org
Apache License 2.0

Backtracking for missing Digests (aka: remove "eager fetch") #11331

Closed: stuhood closed this issue 2 years ago

stuhood commented 3 years ago

When a Digest is missing from the Store, we don't currently have a way to backtrack far enough to re-compute it.

Backtracking the appropriate amount is challenging to do via exception handling -- which @rules don't currently have, but which is still useful to discuss. No particular @rule has enough information on its stack to know the true source of a Digest (which might be deeply nested below one of its await Gets or arguments), and even if it did determine which dependency was the source of the bad Digest, the @rule would need an API to invalidate the relevant subgraph.

Instead, we should lean further into the fact that the NodeOutput type can report the Digests that it computed, and trigger Graph::invalidate_from_roots for the Node(s) that produced the missing Digest. This will naturally dirty the nodes that depend on those Digests, causing them to be canceled and re-run.
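To make the invalidation mechanism concrete, here is a minimal sketch of the idea in Python. All names here (`Node`, `Graph`, `invalidate_from_roots`, `nodes_producing`) are illustrative stand-ins, not the actual pants engine types: each memoized node records the digests its output contains, and when a digest turns up missing we dirty its producer(s) plus all transitive dependents, forcing them to re-run.

```python
# Hypothetical sketch, not the real pants engine API: nodes memoize a
# (value, digests) pair, and invalidation propagates to dependents.

class Node:
    def __init__(self, name, run, deps=()):
        self.name = name
        self.run = run          # callable returning (value, set_of_digests)
        self.deps = list(deps)
        self.result = None      # memoized (value, digests), or None if dirty

class Graph:
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def execute(self, name):
        node = self.nodes[name]
        if node.result is None:
            for dep in node.deps:
                self.execute(dep)
            node.result = node.run()
        return node.result

    def nodes_producing(self, digest):
        """Find the node(s) whose memoized output contains `digest`."""
        return [
            n.name for n in self.nodes.values()
            if n.result is not None and digest in n.result[1]
        ]

    def invalidate_from_roots(self, predicate):
        """Dirty every node matching `predicate`, plus transitive dependents."""
        dirty = {name for name, n in self.nodes.items() if predicate(n)}
        changed = True
        while changed:
            changed = False
            for node in self.nodes.values():
                if node.name not in dirty and any(d in dirty for d in node.deps):
                    dirty.add(node.name)
                    changed = True
        for name in dirty:
            self.nodes[name].result = None
        return dirty
```

On a missing digest `d`, the engine would call something like `graph.invalidate_from_roots(lambda n: n.result is not None and d in n.result[1])`: the producing node and everything above it become dirty, and the next `execute` naturally re-runs them.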

From an implementation perspective, some alternatives:


TODO: A blocker for this issue being a solution to the "fall back to local execution" problem is determining what will happen differently on the second run of a Process, and how that different behavior will be triggered. Simply re-running the original Node again is likely to result in the exact same output: for example, if the Digest was produced by a cache lookup, re-looking up in the cache will probably produce the same Digest again.

So when invalidating, we'd also need to decide how to affect the second run of the Process. A few potential options there:

stuhood commented 3 years ago

Doing some portion of #11330 might be a prerequisite for this: if we backtracked to retry a Process node (for example), the Runner might need to execute differently than it did the first time: ie, it might need to actually run the Process in the case of a remote_cache::Runner.

EDIT: And in fact, determining what would happen differently on the second run of an invalidated Process is probably a blocker for treating this issue, rather than #11330, as a solution to that problem.
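The "execute differently the second time" requirement can be sketched as follows. This is a hypothetical illustration, not the actual `remote_cache::Runner` (which is Rust): a caching runner that may answer the first attempt from the remote cache, but that actually runs the process on a retry triggered by backtracking, so a fresh output digest gets produced.

```python
# Hypothetical sketch of a retry-aware caching runner; names and the
# `attempt` parameter are illustrative, not the real pants API.

class RemoteCacheRunner:
    def __init__(self, cache, inner_run):
        self.cache = cache          # process -> previously cached result
        self.inner_run = inner_run  # callable that actually executes a process

    def run(self, process, attempt=0):
        # A bare re-run of the node would repeat the cache lookup and hand
        # back the same (missing) digest, so retries must bypass the cache.
        if attempt == 0 and process in self.cache:
            return self.cache[process]
        result = self.inner_run(process)
        self.cache[process] = result
        return result
```

The design point is that the runner, not the graph, knows which of its strategies (cache lookup vs. actual execution) produced the bad digest, so the retry signal has to reach it.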

stuhood commented 3 years ago

The minor caveat that I had added in my previous comment is actually fairly substantial: I've expanded the section of the ticket description below the line with a big TODO. #11330 is probably not a prerequisite for this one, but I'm confident that it is much smaller, and so a good idea to do first.

stuhood commented 3 years ago

It would also be good to justify implementing backtracking by confirming that it reduces the total number of bytes downloaded by clients.

stuhood commented 2 years ago

> It would also be good to justify implementing backtracking by confirming that it reduces the total number of bytes downloaded by clients.

Based on production experience, this aspect is unlikely to be a significant benefit, since relatively little data is actually being downloaded in most cases. The larger benefit will be reducing the total number of requests, which are concurrency-bounded and introduce round-trip time.

And in particular, it looks like the concurrency limit for access to the Store does actually impact total runtime for fully cached use cases, because the eager fetch of cache entries ends up taking a non-trivial amount of time.

stuhood commented 2 years ago

To gauge the benefit of this change, I set remote_cache_eager_fetch=false in pantsbuild/pants CI and another private repository.

With eager_fetch=False:

Investigating the strange number of bytes downloaded in the private repo, I determined that the reason for the increased byte count is that we don't dedupe load_bytes_with calls (which we did for uploads in #12087). I've opened #15524 about that.

Lazily fetching bytes means that we won't fetch the outputs of processes, but we still do need to fetch the inputs of processes in order to speculate... and in cases where lots of processes depend on a large input, that could mean lots of redundant fetching (whereas with eager fetching, all downstream processes wait for the producing process to complete and download its output). It also means more total calls to load_bytes_with, even though the number of unique load_bytes_with calls with eager_fetch=False is ~50% lower.

EDIT: Oh, and perhaps most importantly: pantsbuild/pants CI completed 15% faster with eager_fetch=False.
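The deduplication gap described above (tracked in #15524, mirroring what #12087 did for uploads) can be sketched like this. The names are hypothetical, not the real `Store` API: concurrent `load_bytes_with`-style calls for the same digest share one underlying fetch instead of each hitting the remote store.

```python
# Hypothetical sketch of fetch deduplication; not the actual pants Store.
import threading
from concurrent.futures import Future

class DedupingStore:
    def __init__(self, fetch):
        self.fetch = fetch        # digest -> bytes (the real remote call)
        self._lock = threading.Lock()
        self._futures = {}        # digest -> Future shared by all callers

    def load_bytes(self, digest):
        with self._lock:
            fut = self._futures.get(digest)
            owner = fut is None
            if owner:
                fut = Future()
                self._futures[digest] = fut
        if owner:
            # Only the first caller performs the fetch; everyone else
            # blocks on the shared Future. A real implementation would
            # also evict completed entries and propagate failures.
            try:
                fut.set_result(self.fetch(digest))
            except Exception as exc:
                fut.set_exception(exc)
        return fut.result()
```

With this shape, N concurrent requests for one digest cost one round trip, which directly targets the request-count and concurrency-limit costs described in the comments above.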

stuhood commented 2 years ago

#15761 takes a first step here. The rest of the change should be roughly as described above, with one other detail: in order to retry the synchronous methods of interface.rs which are used in @goal_rules, we will actually need to propagate MissingDigest through Python code, and re-catch it at the @rule boundary. Should be possible, but I didn't immediately see how to add a field to a custom exception type with pyo3.
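On the Python side, the shape being described could look roughly like this. Everything here is an illustrative sketch, not the actual pants/pyo3 types: a `MissingDigest` exception carrying the offending digest as a field (the part that was unclear to do with pyo3), propagated through rule code and re-caught at the boundary, where the producer is invalidated and the rule retried.

```python
# Hypothetical sketch; `MissingDigest` and `run_rule_with_backtracking`
# are illustrative names, not the real pants API.

class MissingDigest(Exception):
    def __init__(self, digest):
        super().__init__(f"missing digest: {digest}")
        self.digest = digest  # the field pyo3 would need to expose

def run_rule_with_backtracking(rule, invalidate, max_retries=2):
    """Invoke a synchronous rule; on MissingDigest, invalidate the
    producer of that digest in the graph and retry from the top."""
    for _ in range(max_retries + 1):
        try:
            return rule()
        except MissingDigest as e:
            invalidate(e.digest)
    raise RuntimeError("digest still missing after retries")
```

The key property is that rule code itself never handles the error: it unwinds to the boundary, which has access to the graph and can trigger the backtracking described in the issue.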

stuhood commented 2 years ago

Ok, the full stack of PRs for backtracking is now posted:

  1. https://github.com/pantsbuild/pants/pull/15761
  2. https://github.com/pantsbuild/pants/pull/15854
  3. ~~https://github.com/pantsbuild/pants/pull/15856~~
  4. https://github.com/pantsbuild/pants/pull/15850
  5. https://github.com/pantsbuild/pants/issues/15867

While #15524 will be important for performance, it doesn't affect the safety of eager_fetch=False, and should be cherry-pickable.