dask / distributed

A distributed task scheduler for Dask
https://distributed.dask.org
BSD 3-Clause "New" or "Revised" License

Remove data spilling that acts on `target` threshold #8820

Open fjetter opened 3 months ago

fjetter commented 3 months ago

We currently have two distinct spill-to-disk mechanisms:

  1. If a task completes and the memory of the worker is above target, the data is written to disk immediately, i.e. this is basically a LIFO policy. Note: this functionality is implemented in zict.
  2. If, during a recurring check (i.e. a PeriodicCallback), worker memory is above spill, we start writing data to disk based on a simple LRU policy.
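For illustration, a minimal Python sketch of the two mechanisms as described above (all names and numbers here are made up; the real logic lives in zict and the worker's memory monitor, and later comments in this thread refine the description of mechanism 1):

```python
# Hypothetical sketch of the two spill mechanisms; `target` and `spill`
# mirror the config values distributed.worker.memory.target / .spill.
from collections import OrderedDict

target = 0.6        # fraction of memory_limit checked on task completion
spill = 0.7         # fraction of memory_limit checked by the periodic callback
memory_limit = 100  # pretend bytes

fast = OrderedDict()  # in-memory data, insertion order ~ recency
slow = {}             # spilled-to-disk data (here: just another dict)

def memory_used():
    return sum(fast.values())

def on_task_complete(key, nbytes):
    # Mechanism 1: if storing this output pushes us over `target`,
    # the newly produced data is written out immediately (LIFO-ish).
    fast[key] = nbytes
    if memory_used() > target * memory_limit:
        slow[key] = fast.pop(key)

def periodic_callback():
    # Mechanism 2: evict least-recently-used keys until below `spill`.
    while memory_used() > spill * memory_limit and fast:
        k, v = fast.popitem(last=False)  # oldest entry first = LRU
        slow[k] = v
```

With these numbers, an output that lifts total memory above 60 is spilled on completion by mechanism 1, while mechanism 2 only kicks in above 70 and drains the oldest keys first.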

I believe we should get rid of mechanism 1.) entirely, i.e. deprecate the target threshold and disable implicit/automatic zict-based spilling altogether.

First of all, this split is confusing: I believe very few people actually understand the difference between the two mechanisms, let alone know which value to adjust when and why.

More importantly, though, the two mechanisms implement contradicting eviction policies. While it's non-trivial to tell which policy is best, it is not helpful to have two contradicting policies active at the same time.

These are a couple of the reasons why I am suggesting we remove target and keep spill.

hendrikmakait commented 2 months ago
  • If a task completes and the memory of the worker is above target, the data is written to disk immediately, i.e. this is basically a LIFO policy. Note: this functionality is implemented in zict.

From what I understand, this mechanism works slightly differently from what is described here. If a task completes and the size of its output data is above target, we write that data to disk immediately.

Chances are pretty high that this will evict everything from the LRU on the next callback, because overall memory consumption exceeded the spill threshold. Unless there's just a single one of these large objects on the worker, it might also run into memory issues after this. Another potential way of dealing with this case would be failing any task whose output data exceeds target.

hendrikmakait commented 2 months ago

Yet another alternative would be to immediately trigger a new execution of the callback. This would take care of spilling and potentially pausing the worker before another task starts adding even more memory pressure.
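A rough sketch of that idea, with made-up names (`memory_monitor` here stands in for the worker's periodic spill/pause check; none of this is the actual worker API):

```python
# Hypothetical sketch: instead of letting zict spill the new output
# synchronously, run the periodic memory check right away whenever a
# completed task produced unusually large output.

SPILL_FRACTION = 0.7
TARGET_FRACTION = 0.6
MEMORY_LIMIT = 100  # pretend bytes

def memory_monitor(state):
    """Stand-in for the recurring check that spills LRU keys to disk
    (the real callback can also pause the worker)."""
    while sum(state["fast"].values()) > SPILL_FRACTION * MEMORY_LIMIT and state["fast"]:
        key = next(iter(state["fast"]))  # first-inserted key ~ least recently used
        state["slow"][key] = state["fast"].pop(key)

def on_task_complete(state, key, nbytes):
    state["fast"][key] = nbytes
    if nbytes > TARGET_FRACTION * MEMORY_LIMIT:
        # Large output: trigger the monitor immediately instead of
        # waiting for the next periodic tick.
        memory_monitor(state)
```

This keeps a single eviction policy (the LRU in the periodic check) while still reacting promptly to oversized outputs.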

fjetter commented 2 months ago

From what I understand, this mechanism works slightly differently from what is described here. If a task completes and the size of its output data is above target, we write that data to disk immediately.

Well, you are right. I checked again and I think the logic changed over the years; the LRU logic in zict that defines this behavior is here: https://github.com/dask/zict/blob/7e5dafedcf016a4ac61286badbf1f8da3741d2d3/zict/lru.py#L135-L166. An important assumption of this ticket is indeed wrong: as long as the single key we're writing is not itself above the target threshold (a case that probably never actually happens), we are evicting based on an LRU policy after all. That explains why we haven't seen any large change in behavior when disabling this.
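For illustration, a toy re-implementation of the eviction rule in the linked zict code as I understand it (a hedged sketch, not zict's actual `LRU` class): a value heavier than the whole limit is evicted immediately on insert, and otherwise least-recently-used entries are evicted until the total weight fits.

```python
from collections import OrderedDict

class TinyLRU:
    """Toy model of zict's LRU mapping. `n` is the weight limit
    (e.g. target * memory_limit) and `evict` moves a key to slow storage."""

    def __init__(self, n, evict):
        self.n = n
        self.evict = evict
        self.d = OrderedDict()  # insertion/access order ~ recency
        self.total = 0

    def __setitem__(self, key, value):
        weight = value  # pretend each value is its own weight in bytes
        self.d[key] = value
        self.d.move_to_end(key)
        self.total += weight
        if weight > self.n:
            # The single new key is heavier than the whole limit:
            # it is written out immediately (the corner case discussed above).
            del self.d[key]
            self.total -= weight
            self.evict(key, value)
        else:
            # Normal path: evict oldest entries until we fit again.
            while self.total > self.n:
                k, v = next(iter(self.d.items()))
                del self.d[k]
                self.total -= v
                self.evict(k, v)
```

So in the common case where no single output exceeds the limit, the zict layer behaves as an LRU too, which is why disabling it changed little.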

I still stand by removing this. I feel the encapsulated spilling in zict adds complexity, and the two thresholds are not user friendly.