cubed-dev / cubed

Bounded-memory serverless distributed N-dimensional array processing
https://cubed-dev.github.io/cubed/
Apache License 2.0

HPC executor #467

Open tomwhite opened 4 months ago

tomwhite commented 4 months ago

Some possibilities:

cc @TomNicholas

TomNicholas commented 4 months ago

The main NCAR machine I have access to uses PBS - perhaps there is an equivalent of slurm job arrays for PBS?

TomNicholas commented 3 months ago

I've been looking into this more, and all of these look potentially interesting to us!

perhaps there is an equivalent of slurm job arrays for PBS?

There is, and it looks very similar. Perhaps a PBSExecutor and SlurmExecutor could both inherit from an abstract JobArrayExecutor...

I actually started sketching this out, borrowing heavily from dask-jobqueue, which defines common base classes for different HPC cluster classes.
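Roughly what I had in mind (all class and method names here are illustrative, not anything that exists in Cubed or dask-jobqueue):

```python
from abc import ABC, abstractmethod


class JobArrayExecutor(ABC):
    """Illustrative base class: run each Cubed stage as one array job,
    with one array task per unit of work."""

    @abstractmethod
    def submit_command(self, script_path: str, num_tasks: int) -> list[str]:
        """Scheduler command that submits script_path as an array of num_tasks tasks."""

    @abstractmethod
    def task_id_variable(self) -> str:
        """Environment variable holding the array index inside each task."""


class SlurmExecutor(JobArrayExecutor):
    def submit_command(self, script_path, num_tasks):
        return ["sbatch", f"--array=0-{num_tasks - 1}", script_path]

    def task_id_variable(self):
        return "SLURM_ARRAY_TASK_ID"


class PBSExecutor(JobArrayExecutor):
    def submit_command(self, script_path, num_tasks):
        return ["qsub", "-J", f"0-{num_tasks - 1}", script_path]

    def task_id_variable(self):
        return "PBS_ARRAY_INDEX"
```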

But got stuck when I realised that:

  1. I need a way to propagate both the environment and the context of an arbitrary function (e.g. in apply_gufunc) to the bash scripts that make up each job (see the sketch after this list),
  2. It might be really slow to have to wait for the queuing system to start running the tasks between every stage. Ideally, once we have been allocated resources we don't really want to release them. With cloud serverless this doesn't matter so much, because the time between requesting a worker and getting one is extremely short, but on an HPC queue it could literally be hours. A lot of real-world workloads are going to require the most resources at the start - could we imagine that e.g. a reduction with 3 rounds gets 100 workers, then keeps hold of 10 for the next round, then keeps hold of 1 for the final round?
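For (1), one possible approach (cloudpickle is just an assumption here - Cubed may already have its own serialization) would be to pickle the function and its inputs on the submitting node, then have each array task unpickle and run only the piece it is responsible for:

```python
import os

import cloudpickle


def write_tasks(path, func, list_of_args):
    # On the submitting node, before the array job is queued.
    with open(path, "wb") as f:
        cloudpickle.dump((func, list_of_args), f)


def run_task(path, task_id_env="SLURM_ARRAY_TASK_ID"):
    # Inside each array task, called from a tiny Python entrypoint
    # that the generated bash script invokes.
    with open(path, "rb") as f:
        func, list_of_args = cloudpickle.load(f)
    return func(*list_of_args[int(os.environ[task_id_env])])
```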

funcX: https://funcx.org/

funcX has recently become Globus Compute, a new offering from the same people who run the widely-used Globus file transfer service. The Globus Compute Executor class is a subclass of Python's concurrent.futures.Executor, so presumably it could be used similarly to the existing ProcessesExecutor?

Globus Compute requires an endpoint to be set up (i.e. on the HPC system of interest), and it's pretty new so I don't know if it will be set up anywhere yet. Then it requires a Globus account (is that free?). There is a very small public tutorial endpoint we could try using for testing though.
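For reference, usage via the SDK looks roughly like this (the endpoint UUID is a placeholder; the Executor should behave like any other concurrent.futures executor):

```python
from globus_compute_sdk import Executor

ENDPOINT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder endpoint UUID


def double(x):
    return 2 * x


with Executor(endpoint_id=ENDPOINT_ID) as ex:
    future = ex.submit(double, 21)
    print(future.result())  # 42
```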

Parsl

This seems quite similar to Globus Compute, except that it uses a decorator-centric Python API, a bit like Modal stubs. It also requires setup, but it's already available on two large machines we use at [C]Worthy (Expanse and Perlmutter), which I might be able to get access to (full list of machines here). Perlmutter has a nice docs page about how to use Parsl on their system.

It solves the function context issue by restricting each app to knowing only about the local variables within the function - see docs.
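A minimal sketch of the decorator API, using the bundled local-threads config (note that everything the app uses, imports included, has to live inside the function body):

```python
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)


@python_app
def double(x):
    # Any imports the app needs would also go inside the function body.
    return 2 * x


print(double(21).result())  # 42
```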

This feature also sounds really useful:

[Parsl] avoids long job scheduler queue delays by acquiring one set of resources for the entire program and it allows for scheduling of many tasks on individual nodes.
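As far as I can tell from the docs, that pilot-job behaviour is configured through the executor's provider - something like this (partition name, node counts and walltime are placeholders):

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import SlurmProvider

config = Config(
    executors=[
        HighThroughputExecutor(
            label="htex",
            provider=SlurmProvider(
                partition="compute",   # placeholder partition name
                nodes_per_block=4,     # size of each pilot allocation
                init_blocks=1,         # acquire it once, up front
                max_blocks=1,
                walltime="01:00:00",
            ),
        )
    ]
)
```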


I would like to try one or two of these out. If we can get Cubed to run reasonably well on HPC that would be a big deal, and worth advertising on its own.

mgrover1 commented 2 months ago

After some discussion at SciPy, globus-compute seems like a solid option for executing functions on HPC.

Here is a link to a cookbook using it to remotely submit jobs to an HPC cluster at a DOE lab.

It might be helpful to use this in addition to Parsl, to allow more function-based resource specification, as shown in the docs here.

Happy to discuss more with you all - I hope this provides a good starting point!

TomNicholas commented 2 months ago

@jakirkham what was the library you were talking about earlier with the concurrent futures-like API but for HPC?

jakirkham commented 2 months ago

Was thinking of @applio's team's work on Dragon, which provides a multiprocessing-style API.

AIUI this could be used with concurrent.futures or Parsl (mentioned above), though Davin would be able to say more about how to use it.

https://github.com/DragonHPC/dragon
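If I'm reading the README right, basic usage is a drop-in start method for the standard multiprocessing module, roughly like this (untested sketch, launched under the dragon runtime rather than plain python):

```python
import dragon  # must be imported first so the "dragon" start method is registered
import multiprocessing as mp


def square(x):
    return x * x


if __name__ == "__main__":
    mp.set_start_method("dragon")  # workers may be placed on any node in the allocation
    with mp.Pool(8) as pool:
        print(pool.map(square, range(16)))
```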

negin513 commented 2 months ago

Essentially there are two pieces needed:

TomNicholas commented 2 months ago

I chatted with @applio earlier and he also mentioned some kind of distributed dict, which we could potentially use as the storage layer via Zarr. Was that a part of dragon too @applio?

jakirkham commented 2 months ago

Ah, forgot to mention that mpi4py also provides concurrent.futures-style Executors (in mpi4py.futures).
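E.g. MPIPoolExecutor, which is a drop-in concurrent.futures.Executor:

```python
from mpi4py.futures import MPIPoolExecutor


def double(x):
    return 2 * x


if __name__ == "__main__":
    # Typically launched with something like:
    #   mpiexec -n 1 python -m mpi4py.futures script.py
    with MPIPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(double, range(8))))
```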

tomwhite commented 2 months ago

I added some notes on how to write a new executor in #498.