crusaderky opened this issue 2 years ago
The easiest way to make priority matter due to "network saturation" (the quotes are deliberate) is to retire a worker in the middle of a spill-heavy computation, since the time that peer workers spend spilling data is time during which the tasks are still in flight. This would be mitigated by https://github.com/dask/distributed/issues/4424, but not solved, because the same ticket would also make the peer workers hit the pause threshold much faster.
AMM RetireWorker should replicate with priority -2 for scattered data and -1 for all other data (both higher priority than default compute() calls, which run at 0; lower numbers are served first).
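For illustration, a minimal sketch of how a RetireWorker-style policy could attach these priorities, assuming a hypothetical fourth priority element on the AMM suggestion tuple (today's suggestions are just `(op, ts, candidates)`) and assuming scattered data can be told apart on the scheduler as a `TaskState` without a `run_spec`:

```python
from distributed.active_memory_manager import ActiveMemoryManagerPolicy


class RetireWorkerSketch(ActiveMemoryManagerPolicy):
    """Sketch of the proposal, not the real RetireWorker policy."""

    def __init__(self, address: str):
        self.address = address

    def run(self):
        # self.manager is attached by the AMM when the policy is registered
        ws = self.manager.scheduler.workers.get(self.address)
        if ws is None:
            return
        for ts in ws.has_what:
            if len(ts.who_has) > 1:
                continue  # another replica already exists elsewhere
            # Scattered data has no run_spec on the scheduler and can never
            # be recomputed; replicate it ahead of everything else.
            priority = -2 if ts.run_spec is None else -1
            # Hypothetical 4th element: today's suggestions carry no priority.
            yield "replicate", ts, None, priority
```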
+1
> The easiest way to make priority matter due to "network saturation" (the quotes are deliberate) is to retire a worker in the middle of a spill-heavy computation

Why would it not matter in other circumstances? A very network-heavy workload (e.g. a shuffle) would also saturate the network even without spilling, wouldn't it?
It does matter in other circumstances too; with spill/unspill it's just much easier to build up a fetch queue that is several minutes long.
Worth noting that we just merged a PR (#7167) that exports the length of the fetch queue to Prometheus. We should start monitoring it: whenever we observe the fetch queue becoming very long, that's a case where today's AMM RetireWorker would lag behind.
Active Memory Manager (AMM) data transfers run at a hardcoded priority of 1:
https://github.com/dask/distributed/blob/91487706e8a1d1d4ff369031839e111707273e73/distributed/worker_state_machine.py#L2822-L2826
This means that if a network-heavy workload is running at the default priority 0, to the point that it saturates the network, the AMM will yield and slow down so as not to hamper the workload.
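As a toy model of the effect (not the actual worker code), assume transfers are served in ascending order of their priority tuple, e.g. via a min-heap; the hardcoded `(1,)` then only ever surfaces once no `(0,)` workload transfer is waiting:

```python
import heapq

# Toy model of a worker's fetch queue (assumption: transfers are served in
# ascending order of their priority tuple). Workload transfers from compute()
# enter at (0,); AMM transfers enter at the hardcoded (1,).
fetch_queue: list[tuple[tuple[int, ...], str]] = []
heapq.heappush(fetch_queue, ((1,), "amm-replica"))    # AMM: hardcoded priority 1
heapq.heappush(fetch_queue, ((0,), "shuffle-chunk"))  # workload: default priority 0

while fetch_queue:
    priority, key = heapq.heappop(fetch_queue)
    print(priority, key)  # prints (0,) shuffle-chunk before (1,) amm-replica
```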
This is generally desirable for general-purpose rebalancing and replication. However, there are two use cases where it risks being a poor idea:
Graceful worker retirement (AMM RetireWorker) happens chiefly in three cases:
Graceful worker retirement will try to push all of the unique data out of a worker at once, and it will hang indefinitely as soon as there's no capacity left anywhere else on the cluster, e.g. when the retirement pushes all surviving workers beyond 80% memory and they get paused. If a hard shutdown is coming after a set time, this means losing any remaining data and having to recompute it. However, not all data on a worker is equal: task outputs can be recomputed somewhere else; scattered data can't, and will cause all computations that rely on it to fall over. See the illustration below.
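A small runnable illustration of that asymmetry, using a local cluster:

```python
from distributed import Client

client = Client()  # local cluster for demonstration

# Recomputable: the scheduler keeps the task's recipe (its run_spec), so if
# the worker holding the result dies, the result is simply computed again.
computed = client.submit(sum, [1, 2, 3])

# Not recomputable: scatter ships raw data in; the scheduler has no recipe,
# so losing the last replica makes every dependent computation fail.
scattered = client.scatter(b"irreplaceable payload")
```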
Proposed design
- Add a priority to `replicate` suggestions and default to 1 if omitted
- Add a `{key: priority}` attribute to `AcquireReplicasEvent`
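A minimal sketch of the proposed event extension; `who_has` and `nbytes` mirror the fields the event already carries (base-class fields elided), while the `priorities` field name is an assumption for the proposed `{key: priority}` attribute:

```python
from __future__ import annotations

from dataclasses import dataclass, field


# Sketch of the proposed shape of AcquireReplicasEvent. The real event lives
# in distributed/worker_state_machine.py and subclasses StateMachineEvent.
@dataclass
class AcquireReplicasEvent:
    who_has: dict[str, list[str]]  # key -> addresses of workers holding a replica
    nbytes: dict[str, int]         # key -> size in bytes
    # Proposed addition (field name assumed): per-key priority;
    # keys absent from the mapping keep today's behaviour and default to 1.
    priorities: dict[str, int] = field(default_factory=dict)
```

The worker-side handler could then replace today's hardcoded `(1,)` with something like `priority=(ev.priorities.get(key, 1),)`, letting RetireWorker feed -2/-1 through its `replicate` suggestions.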