ipython / ipyparallel

IPython Parallel: Interactive Parallel Computing in Python
https://ipyparallel.readthedocs.io/

remote restart ipengine #94

Closed jacksonloper closed 3 years ago

jacksonloper commented 8 years ago

I think this is actually a pretty old idea, but I was wondering if there has been any movement on it.

In the same way you can restart a kernel in a notebook, it would be awesome if you could restart an ipengine. My understanding is that this would require a nontrivial rewrite of the engine, involving an entire extra monitoring process that just isn't there right now.

Does this seem like something likely to be implemented? If I took a crack at it, would that be helpful? Or is there another plan...

minrk commented 8 years ago

The plan is to put a nanny process next to each Engine, which would enable remote signalling, restarting, etc. This is a general plan for Jupyter kernels that will be extended to IPython parallel.
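
For concreteness, here's a minimal sketch of what such a nanny might look like, assuming a design where the nanny owns the engine as a child process. The class and names here are hypothetical, not actual Jupyter or ipyparallel code:

# Hypothetical nanny sketch: owning the engine as a child process makes
# remote signalling and restarting straightforward. Not ipyparallel code.
import signal
import subprocess

class EngineNanny:
    def __init__(self, engine_cmd):
        self.engine_cmd = engine_cmd
        self.proc = subprocess.Popen(engine_cmd)

    def interrupt(self):
        # remote "keyboard interrupt": forward SIGINT to the engine
        self.proc.send_signal(signal.SIGINT)

    def restart(self):
        # remote restart: stop the old engine, launch a fresh one in place
        self.proc.terminate()
        self.proc.wait()
        self.proc = subprocess.Popen(self.engine_cmd)

# e.g. nanny = EngineNanny(["ipengine"]); ...; nanny.restart()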

minrk commented 8 years ago

The tricky bit for IPython parallel is to not ruin cases like MPI, where the engine itself must be the main process and cannot be started by the nanny. This means that either the engine starts the nanny the first time, or we special-case MPI somehow.
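
A sketch of the "engine starts the nanny" option under MPI, where the mpiexec'd process stays the engine and forks a sidecar. Here nanny.py is a hypothetical helper that monitors and signals the PID it is given:

# Inverted relationship for MPI: the engine itself launches the nanny,
# since mpiexec must start the engine directly. All names illustrative.
import os
import subprocess
import sys

def start_sidecar_nanny():
    # hand the nanny our PID so it can deliver SIGINT to us later
    return subprocess.Popen([sys.executable, "nanny.py", str(os.getpid())])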

jacksonloper commented 8 years ago

Right. MPI.

Well, at the moment, I can't see how to reliably handle restarts for MPI with fewer than three processes. Kinda dumb, but here's the picture I have in mind...

We have to allow that there may be at least three computers involved in an MPI situation.

We want to be able to send keyboard-interrupt signals to the engine; ergo, the nanny needs to be on the same node as the engine (correct me if I'm wrong). So at the very least, we would need the nanny and the engine colocated on one node.

Now let's say we get a restart signal. We need to kill the engine and launch a new engine that is also part of the same MPI universe. We can do this with, e.g., MPI_Comm_spawn. The trouble is this: MPI may be subject to an arbitrary and cruel resource manager, which may decide to put the new engine on ClusterNodeB. In which case the nanny needs to live on ClusterNodeB. But it doesn't. Failbot.

To deal with this situation, we actually need a third process: the nanny stays put, and a micronanny lives alongside the engine on whatever node it lands on.

Now if the nanny is told to keyboard-interrupt, it talks to the micronanny, which actually sends the SIGINT. If the nanny is told to restart, it creates an entirely new (micronanny, engine) pair, which might land on ClusterNodeA or on ClusterNodeB.

Remarks

  1. One downside of this approach is that the engines will have to make a new intracommunicator (i.e. users can't depend on COMM_WORLD). However, I cannot see any way of avoiding this: if you want to be able to start new processes, you need some kind of spawn or join, and that creates intercommunicators, which then need to get merged into intracommunicators. So we'll want to inject some variable COMM_IPWORLD into the namespace, so you can replace MPI.COMM_WORLD.Allreduce with COMM_IPWORLD.Allreduce (see the sketch after this list).
  2. There are certainly situations in which one can guarantee an MPI process will restart on the same host; in that case you wouldn't need the micronanny. This may even be the common case; I'm not terribly well acquainted with "standard practice." I could do a little survey of the supercomputers I have access to and check. But there are definitely situations in which I don't know how you could make such a guarantee...
  3. I've never actually tried to kill a single node of an intracommunicator forged by repeatedly spawning and merging. It's possible something will explode.
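
Here's a minimal mpi4py sketch of the spawn-and-merge dance from remark 1. COMM_IPWORLD and replacement_engine.py are illustrative names, not ipyparallel API:

# Surviving engines collectively spawn one replacement process; the
# resource manager picks its host, which is the placement problem above.
import sys
from mpi4py import MPI

intercomm = MPI.COMM_WORLD.Spawn(sys.executable,
                                 args=["replacement_engine.py"],
                                 maxprocs=1)

# Merging the intercommunicator yields a fresh intracommunicator spanning
# old and new processes; this is what would get rebound in each engine's
# namespace, since COMM_WORLD itself can never grow.
COMM_IPWORLD = intercomm.Merge(high=False)

# In replacement_engine.py the spawned process does the matching half:
#   parent = MPI.Comm.Get_parent()
#   COMM_IPWORLD = parent.Merge(high=True)
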
jacksonloper commented 8 years ago

...in conclusion, I hope MPI doesn't block progress on this. When they introduced "dynamic process management" in MPI-2.0, I don't believe they were thinking of a scenario where a single worker could restart.

At the bare minimum, every time an MPI process restarts we will need to destroy an old intracommunicator and inject a new one into the ipengine namespace. If users have data structures referencing the old intracommunicator, those will become invalid, which could be a bit confusing for users :). But perhaps somebody with more MPI-fu can come along and prove me wrong!

In other news, let me know if there's a useful way I could contribute to the kernel-nanny architecture for Jupyter.

minrk commented 8 years ago

I'm 100% okay with MPI engines not being allowed to be restarted; that's not a problem. It's just the parent/child relationship that's an issue, because the mpiexec'd process must be the actual kernel, not the nanny.

jacksonloper commented 8 years ago

Cool. That makes sense.

neuralyzer commented 8 years ago

The engine restart feature would be really useful. E.g., I use Theano on a cluster: once I import theano, a GPU is assigned to the importing process, and the only way I know of to "free" the GPU again is to terminate/restart the process.

OliverEvans96 commented 6 years ago

Any news here in the last two years?

tavy14t commented 4 years ago

Any news in the last 4 years?

import ipyparallel as ipp
client = ipp.Client()
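# restart=True asks for engines 10-23 to be relaunched after shutdown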
client.shutdown(targets=range(10, 24), restart=True)

NotImplementedError: Engine restart is not yet implemented

minrk commented 3 years ago

#463 lays the groundwork for this to be possible