OliverEvans96 opened 7 years ago
By following http://ipyparallel.readthedocs.io/en/latest/process.html#starting-the-controller-and-engines-on-different-hosts we can do the following:

1. Start a controller on `login`
2. Copy the json config to `compute`
3. Start the engine on `compute`, specifying the json config. The engine and controller will communicate & register.
4. Open a notebook on `login`.
5. Run:
   ```python
   import ipyparallel as ipp
   rc = ipp.Client()
   %autopx
   ```

Any following commands will be run on `compute`.
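A sketch of steps 1–3 in shell form (hostnames `login` and `compute` are placeholders, and the connection-file path assumes the default IPython profile; check your own profile's `security/` directory):

```shell
# On login: start the controller, listening on an interface compute can reach
ipcontroller --ip='*'

# Still on login: copy the engine connection file to the compute node
scp ~/.ipython/profile_default/security/ipcontroller-engine.json compute:~/

# On compute: start an engine, pointing it at the copied connection file
ipengine --file=~/ipcontroller-engine.json
```

Once the engine registers, `ipp.Client()` in the notebook on `login` should see it.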
- `login` must have an ip address visible to `compute`. This is fine on a cluster, but if `login` is a laptop behind a router, then we'll have to be more creative.
- `Out[0:{num}]:` will be prepended to every output (where `{num}` is the output number on `compute`).
- `login` has no variables defined or modules imported.

@shreddd said that @rcthomas has been working on spawning remote kernels for notebooks, which would be an excellent solution here depending on how far along the idea is.
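For the laptop-behind-a-router case, one possible workaround (a sketch only, not something verified here) is a reverse SSH tunnel so `compute` can reach the controller ports on the laptop. Note the controller uses several ports (registration, heartbeat, etc.), all listed in `ipcontroller-engine.json`, and each would need forwarding; the port number and hostnames below are placeholders:

```shell
# From the laptop: forward a controller port back from compute to the laptop,
# so connections to compute:12345 land on the laptop's controller.
# Repeat -R for every port listed in ipcontroller-engine.json.
ssh -R 12345:localhost:12345 user@compute
```

ipyparallel also has built-in SSH tunneling support in its connection files, which may be a cleaner route than hand-rolled tunnels.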
Not sure yet if this is compatible with existing NERSC constraints, but worth checking on: https://bitbucket.org/tdaff/remote_ikernel
Great find! This looks like basically the idea we're going for.
We would like to run a notebook server on a login and launch a notebook which is connected to a kernel on a compute node.