All your Jupyter kernels, on all your machines, in one place.
Launch Jupyter kernels on remote systems and through batch queues so that they can be used within a local Jupyter notebook.
.. image:: https://raw.githubusercontent.com/tdaff/remote_ikernel/master/doc/kernels.png
Jupyter compatible kernels start through interactive jobs in batch queue systems (SGE, SLURM, PBS...) or through SSH connections. Once the kernel is started, SSH tunnels are created for the communication ports so that the notebook can talk to the kernel as if it were local.
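As a rough illustration of what the tunnelling amounts to, each of the kernel's communication ports (shell, iopub, stdin, control and heartbeat) is forwarded back to the local machine over SSH, much like a manual ``ssh -L`` port forward. The hostname and port numbers below are purely illustrative; ``remote_ikernel`` sets this up for you automatically.

.. code:: shell

    # Hand-rolled equivalent of the tunnelling that remote_ikernel automates:
    # forward the kernel's five ZeroMQ ports from the remote machine back to
    # the local one (port numbers are illustrative).
    ssh -L 40885:localhost:40885 -L 44326:localhost:44326 \
        -L 47328:localhost:47328 -L 49236:localhost:49236 \
        -L 51234:localhost:51234 me@remote.machine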
Commands for managing the kernels are included. It is also possible to use
``remote_ikernel`` to manage kernels from different virtual environments or
different python implementations.
Install with ``pip install remote_ikernel``. Requires ``notebook`` (as part
of Jupyter), version 4.0 or greater, and ``pexpect``. Passwordless ssh to all
the remote machines is also recommended (e.g. nodes on a cluster).
.. warning::

   ``remote_ikernel`` opens multiple connections across several machines
   to tunnel communication ports. If you have concerns about security or
   excessive use of resources, please consult your systems administrator
   before using this software.
.. note::

   When running kernels on remote machines, the notebooks themselves will
   be saved onto the local filesystem, but the kernel will only have access
   to the filesystem of the remote machine running the kernel. If you need
   shared directories, set up ``sshfs`` between your machines.
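For example, a shared directory can be set up with a single ``sshfs`` mount; the hostname and paths below are illustrative.

.. code:: shell

    # Mount the remote working directory locally so that the notebook and
    # the remote kernel see the same files (illustrative paths).
    mkdir -p ~/remote_workdir
    sshfs me@remote.machine:/home/me/Workdir ~/remote_workdir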
.. code:: shell

    # Install the module
    pip install remote_ikernel
.. code:: shell

    # Set up the kernels you would like to use
    remote_ikernel manage
.. code:: shell

    # Add a new kernel running through GridEngine (SGE)
    remote_ikernel manage --add \
        --kernel_cmd="ipython kernel -f {connection_file}" \
        --name="Python 2.7" --cpus=2 --pe=smp --interface=sge
.. code:: shell

    # Add an IJulia kernel on a remote machine, connecting over SSH
    remote_ikernel manage --add \
        --kernel_cmd="/home/me/julia-903644385b/bin/julia -i --startup-file=yes --color=yes /home/me/.julia/v0.6/IJulia/src/kernel.jl {connection_file}" \
        --name="IJulia 0.6.0" --interface=ssh \
        --host=me@remote.machine --workdir='/home/me/Workdir' --language=julia
.. code:: shell

    # Use a virtual environment on the local machine
    remote_ikernel manage --add \
        --kernel_cmd="/home/me/Virtualenvs/dev/bin/ipython kernel -f {connection_file}" \
        --name="Python 2 (venv:dev)" --interface=local
.. code:: shell

    # Start kernels through SLURM on a cluster that is reached by
    # tunnelling through gateway.machine and cluster.frontend
    remote_ikernel manage --add \
        --kernel_cmd="ipython kernel -f {connection_file}" \
        --name="Python 2.7" --cpus=4 --interface=slurm \
        --tunnel-hosts gateway.machine cluster.frontend
The kernel spec files will be installed so that the new kernel appears in
the drop-down list in the notebook. ``remote_ikernel manage`` also has options
to show and delete existing kernels.
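A short sketch of those management options: ``--show`` is mentioned in the notes below, while the ``--delete`` flag spelling and the kernel name used here are assumptions to check against ``remote_ikernel manage --help`` on your installation.

.. code:: shell

    # List the kernels that remote_ikernel has installed
    remote_ikernel manage --show

    # Remove one of them by name (the name here is illustrative;
    # installed kernels are prefixed with rik_)
    remote_ikernel manage --delete rik_sge_python_2_7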
When working with remote machines, each kernel creates two ssh
connections. If you would like to reduce that, you can set up automatic
multiplexing of connections. For each machine, add a configuration to your
``~/.ssh/config``:
.. code::

   Host myhost.ac.uk
       ControlMaster auto
       ControlPath ~/.ssh/%r@%h:%p
       ControlPersist 1
This will create a master connection that remains in the background and multiplexes everything through it. If you have multiple hops, this will need to be added for each hop. Note, for the security conscious, that idle kernels on multiplexed connections allow new ssh connections to be started without a password.
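If you want to inspect or close a master connection yourself, OpenSSH's control commands can be used. This is a standard OpenSSH feature rather than part of ``remote_ikernel``, and the hostname is illustrative.

.. code:: shell

    # Check whether a master connection is currently running for the host
    ssh -O check myhost.ac.uk

    # Close the master connection; any kernels tunnelled through it
    # will lose their connection
    ssh -O exit myhost.ac.uk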
Other options and changes:

* New option ``--tunnel-hosts``. When given, the software will try to create
  an ssh tunnel through all the hosts before starting the final connection.
  Allows using batch queues on remote systems.
* If an ``SSH_ASKPASS`` program is available, it will be used to ask the user
  for a password.
* ``--launch-cmd`` can be used to override the command used to launch the
  interactive jobs on the cluster, e.g. to replace ``qlogin`` with ``qrsh``
  (see the sketch after this list).
* Requires the ``notebook`` package. Use an earlier version if you need to
  use IPython 3.
* Debugging output is shown when ``--verbose`` is used as a kernel option.
* ``pip`` requirements enforce versions less than 4. Use a more recent
  version to ensure compatibility with the Jupyter split.
* Support for PBS through ``qsub -I``.
* ``--remote-launch-args`` can be used to set ``qlogin`` parameters or
  similar.
* ``--remote-precmd`` allows execution of an extra command on the remote host
  before launching a kernel.
* ``--verbose`` option for debugging.
* Kernel names are prefixed with ``rik_``.
* The ``kernel_cmd`` requires the ``{connection_file}`` argument.
* ``remote_ikernel manage --show`` command to show existing kernels.
* Specify the working directory of remote kernels with ``--workdir``.
* ``kernel-uuid.json`` is copied to the working directory for systems where
  there is no access to the frontend filesystem.
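As a sketch of how a few of these options fit together, the following adds an SGE kernel that launches with ``qrsh`` instead of ``qlogin``, passes extra arguments to the scheduler, runs a command on the remote host first, and sets the kernel's working directory. The flag names come from the notes above, but the argument values and the exact quoting are illustrative assumptions; check ``remote_ikernel manage --help`` for the precise syntax.

.. code:: shell

    # Illustrative combination of the options described above
    remote_ikernel manage --add \
        --kernel_cmd="ipython kernel -f {connection_file}" \
        --name="Python 2.7 (qrsh)" --interface=sge \
        --launch-cmd="qrsh" \
        --remote-launch-args="-l h_rt=8:00:00" \
        --remote-precmd="module load python" \
        --workdir="/scratch/me"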