dask / distributed

AMM: Increase data transfer priorities for graceful worker retirement #7183

Open · crusaderky opened this issue 2 years ago

crusaderky commented 2 years ago

Active Memory Manager (AMM) data transfers run at a hardcoded priority of 1:

https://github.com/dask/distributed/blob/91487706e8a1d1d4ff369031839e111707273e73/distributed/worker_state_machine.py#L2822-L2826

This means that if a network-heavy workload is running at the default priority 0, to the point that it saturates the network, then the AMM will yield and slow down so as not to hamper the workload.
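For illustration, here's a minimal sketch (not the actual worker state machine code) of what these numeric priorities mean when transfer requests are drained from a single queue: lower numbers go first, so today's priority-1 AMM transfers only surface once the priority-0 compute transfers ahead of them have been served.

```python
import heapq
from itertools import count

# Toy model of a fetch queue ordered by (priority, insertion order, key).
# Lower priority numbers are popped first; ties are broken FIFO.
_seq = count()
fetch_queue = []

def enqueue(key, priority):
    heapq.heappush(fetch_queue, (priority, next(_seq), key))

enqueue("compute-x", 0)  # transfer needed by a running task (default priority)
enqueue("amm-y", 1)      # AMM replication request (current hardcoded priority)
enqueue("compute-z", 0)

while fetch_queue:
    priority, _, key = heapq.heappop(fetch_queue)
    print(priority, key)
# 0 compute-x
# 0 compute-z
# 1 amm-y
```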

This is generally desirable for general-purpose rebalancing and replication. There are, however, two use cases where it risks being a poor idea:

  1. Graceful worker retirement (AMM RetireWorker) happens chiefly in the following cases:

    • whenever a watchdog has intel that the worker is going to die soon. For example, on AWS you get a 2-minute warning before an instance is forcefully shut down by Amazon. In this case graceful retirement is time-sensitive and should be prioritised over computations.
    • on an adaptive cluster, when the workload has dwindled to the point where it can't saturate the cluster anymore. In this case we should expect only modest data transfers from the computation, so it shouldn't hurt to raise the AMM priority anyway.
  2. Graceful worker retirement will try to push all of the unique data off a worker at once, and it will hang indefinitely as soon as there is no more capacity anywhere else on the cluster, e.g. when the retirement pushes all surviving workers beyond 80% memory and they get paused. If a hard shutdown is incoming after a certain time, this means losing any remaining data and having to recompute it. However, not all data on a worker is equal: task outputs can be recomputed somewhere else; scattered data can't, and losing it will cause every computation that relies on it to fall over (see the sketch after this list).
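To make the recomputable vs. non-recomputable distinction concrete, here is a toy client-side example (the scheduler address and data are made up): the output of submit() can always be recomputed from its task graph on another worker, while scattered data exists only because the client pushed it in, so losing its last replica breaks every computation that depends on it.

```python
from distributed import Client

client = Client("tcp://scheduler:8786")  # hypothetical address

# Task output: if the only replica is lost during retirement,
# the scheduler can simply rerun the task somewhere else.
fut = client.submit(sum, [1, 2, 3])

# Scattered data: there is no task graph to recompute it from.
# If the last replica is lost, every task depending on it falls over.
[scattered] = client.scatter([list(range(1_000_000))])
```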

Proposed design

AMM RetireWorker should replicate with priority -2 for scattered data and -1 for all other data (both are higher than default compute() calls).
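A rough sketch of what that mapping could look like is below; the names are hypothetical, and the real change would live in the RetireWorker policy / worker state machine rather than in standalone code. The key point is simply that both retirement priorities sort ahead of the default priority 0 used by compute().

```python
# Hypothetical priority mapping for transfers triggered by RetireWorker.
# In the worker state machine, lower numbers are scheduled first, so -2 and
# -1 both outrank the default priority 0 of compute() and the hardcoded
# priority 1 currently used for all AMM transfers.
RETIRE_SCATTERED_PRIORITY = -2    # scattered data: cannot be recomputed
RETIRE_TASK_OUTPUT_PRIORITY = -1  # task outputs: recomputable, but still urgent

def retirement_transfer_priority(is_scattered: bool) -> int:
    """Priority for a replica transfer initiated by graceful worker retirement."""
    return RETIRE_SCATTERED_PRIORITY if is_scattered else RETIRE_TASK_OUTPUT_PRIORITY
```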

crusaderky commented 2 years ago

The easiest way to cause priority to matter due to "network saturation" (quotes are in order) is to retire a worker in the middle of a spill-heavy computation, since the time the peer workers spend spilling data is time during which the tasks are still in flight. This would be mitigated by https://github.com/dask/distributed/issues/4424, but not solved, because the same ticket would cause peer workers to hit the pause threshold much faster.
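For reference, the scenario can be reproduced along these lines (a sketch only; the addresses and array sizes are made up): start a computation large enough that peer workers spend much of their time spilling and unspilling, then retire a worker while it is running. Today the retiring worker's data queues at priority 1, behind every priority-0 compute transfer.

```python
import dask.array as da
from distributed import Client

client = Client("tcp://scheduler:8786")  # hypothetical address

# A spill-heavy workload: substantially larger than cluster memory, so peer
# workers spend much of their time spilling/unspilling while keys are in flight.
x = da.random.random((200_000, 200_000), chunks=(10_000, 10_000))
result = (x @ x.T).sum().persist()

# Retire a worker in the middle of the computation; its AMM-driven transfers
# currently sort behind all compute-driven transfers.
client.retire_workers(["tcp://worker-to-retire:40000"])  # hypothetical address
```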

fjetter commented 2 years ago

> AMM RetireWorker should replicate with priority -2 for scattered data and -1 for all other data (both are higher than default compute() calls)

+1

> The easiest way to cause priority to matter due to "network saturation" (quotes are in order) is to retire a worker in the middle of a spill-heavy computation

Why would it not matter in other circumstances? A very network heavy workload (e.g. a shuffle) would also block all network even w/out spilling, wouldn't it?

crusaderky commented 2 years ago

> > The easiest way to cause priority to matter due to "network saturation" (quotes are in order) is to retire a worker in the middle of a spill-heavy computation
>
> Why would it not matter in other circumstances? A very network heavy workload (e.g. a shuffle) would also block all network even w/out spilling, wouldn't it?

It matters. With spill/unspill it's just easier to build a fetch queue that is several minutes long.

crusaderky commented 2 years ago

Worth noting that we just merged a PR (#7167) that exposes the length of the fetch queue to Prometheus. We should start monitoring it. Whenever we observe the fetch queue becoming very large, that's a situation where, today, AMM RetireWorker would lag behind.
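As an example, a periodic check against Prometheus could look roughly like this; the metric name below is a placeholder, not necessarily the name actually exported by #7167.

```python
import requests

PROM = "http://prometheus:9090"            # hypothetical Prometheus server
QUERY = "dask_worker_fetch_queue_length"   # placeholder metric name

# Query the current fetch queue length per worker and flag long queues,
# i.e. situations where AMM RetireWorker transfers would lag behind.
resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=10)
for sample in resp.json()["data"]["result"]:
    worker = sample["metric"].get("instance", "?")
    length = float(sample["value"][1])
    if length > 1000:  # arbitrary threshold
        print(f"{worker}: fetch queue of {length:.0f} keys; retirement would lag")
```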