dangom opened 8 months ago
I don't think screen or tmux requires any special abstraction and should work. At least it works fine for me. You do however have to kill the existing kernel REPL buffer that was attached to the currently broken session. This will force emacs-jupyter to reestablish a fresh connection.
Thanks for the reply. Mind sharing how you get that to work? Is it as simple as adding something to one's ssh config and calling jupyter-connect-repl?
Yes, I am happy to share configuration details and tips. Can you give more information about what you are trying to connect to? Are the remote machine's ports accessible from the subnet your computer is on? Does it have ssh?
You start the IPython kernel server, which binds to some network ports, and then you tunnel the traffic to those ports through ssh. Routing through ssh is preferred because your traffic is encrypted.
Screen or tmux should not matter in this workflow.
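Concretely, the tunneling amounts to forwarding the five ZMQ ports listed in the kernel's connection file (`jupyter console --ssh` does this for you later in this thread). A minimal sketch of that mapping, where the host name `server` and the port numbers are made up:

```python
import json

# The five ZMQ channels a Jupyter kernel listens on, as named
# in the connection file (these key names are standard).
PORT_KEYS = ["shell_port", "iopub_port", "stdin_port", "control_port", "hb_port"]

def ssh_forward_args(connection_info, remote_host):
    """Build the `ssh -L` argument list that forwards each kernel
    port from localhost to the remote machine."""
    args = ["ssh", "-N", remote_host]
    ip = connection_info.get("ip", "127.0.0.1")
    for key in PORT_KEYS:
        port = connection_info[key]
        args += ["-L", f"{port}:{ip}:{port}"]
    return args

# Illustrative connection-file contents (real files also carry a
# signing key, transport, etc.):
info = {"shell_port": 54001, "iopub_port": 54002, "stdin_port": 54003,
        "control_port": 54004, "hb_port": 54005, "ip": "127.0.0.1"}
print(" ".join(ssh_forward_args(info, "server")))
```

Running the printed command in one terminal keeps the tunnel up while any local client connects to the forwarded ports.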
Hi pati-ni, thanks a ton.
I'm trying to connect to a server at work. If I'm at work then the machine ports are accessible from the subnet, if not I tunnel in via a gateway computer via ssh, but since ssh's ProxyJumps are free I usually go through the gateway even when at work. My config looks something like this:
```
Host entry
    User user
    HostName entry.example.com
    IdentityFile ~/.ssh/id_rsa

Host server
    User user
    ProxyJump entry
    HostName server.example.com
```
And my workflow usually consists of opening a file with Tramp, as in `/ssh:server:/path/to/file.py`, and from there running `jupyter-run-repl`.
My issue is that if I lose connection to the server then I have to start the REPL again. Not the end of the world, but sometimes that interferes with the work I'm doing.
The question is then whether some ssh/config incantation would solve the problem by having `jupyter-run-repl` create a REPL behind a tmux pseudo-terminal. If so, I'd also be curious how I could reconnect to it in case I do lose connection, i.e., how to make sure I can easily identify the correct connection files.
Ah, makes sense. No, I have no idea how to do what you are asking, but I can tell you how I do things. What I am doing is managing the session behind screen myself. The session then auto-spawns when I execute an org-mode src block:
1) ssh to your host
2) start a python kernel: I start it using `ipython kernel -f hpc.json`. (Let me know if you are not using python and I will send a more generic command.) You can start it through screen, it really makes no difference.
3) Locate the `hpc.json` file on the remote machine and copy it to your local machine. Mine is located in `~/.local/share/jupyter/runtime/hpc.json`. You may only have to do that step once, but I scp it each time to ensure I have the most up-to-date one.
4) Use jupyter console and the json file to connect to the remote session: `jupyter console --existing=$HOME/hpc.json --ssh server`
5) Upon success of step 4 you will see something like this:

```
[ZMQTerminalIPythonApp] Forwarding connections to 10.129.82.69 via eris
[ZMQTerminalIPythonApp] To connect another client via this tunnel, use:
[ZMQTerminalIPythonApp] --existing hpc-ssh.json
Jupyter console 6.6.3
Python 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:
```
Here is the script I am using from my local machine before connecting to the kernel through emacs-jupyter:

```sh
scp server:.local/share/jupyter/runtime/hpc.json ~/
jupyter console --existing=$HOME/hpc.json --ssh server
```
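That two-line script can also be wrapped in a small helper that always fetches the most recently written connection file, in case several are lying around in the runtime directory. A sketch, assuming a Linux remote with GNU `stat`; the host name, file pattern, and runtime path are assumptions taken from the steps above:

```python
import subprocess

# Default Jupyter runtime dir on Linux, relative to the remote home.
RUNTIME_DIR = ".local/share/jupyter/runtime"

def newest(paths_with_mtimes):
    """Given (path, mtime) pairs, return the most recently written path."""
    return max(paths_with_mtimes, key=lambda pair: pair[1])[0]

def fetch_latest_connection_file(host, pattern="hpc", dest="."):
    """List remote connection files with their mtimes over ssh, then
    scp the newest match locally. Requires GNU stat on the remote."""
    out = subprocess.run(
        ["ssh", host, f"stat -c '%n %Y' {RUNTIME_DIR}/{pattern}*.json"],
        check=True, capture_output=True, text=True).stdout
    pairs = [(line.rsplit(" ", 1)[0], int(line.rsplit(" ", 1)[1]))
             for line in out.splitlines() if line.strip()]
    latest = newest(pairs)
    subprocess.run(["scp", f"{host}:{latest}", dest], check=True)
    return latest
```

After it returns, `jupyter console --existing=<file> --ssh <host>` proceeds as in step 4.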
6) The `hpc-ssh.json` file contains what you want. It is generated in the same directory as `hpc.json`. I usually execute a src block with the right session parameters. Alternatively, you can do `M-x jupyter-connect-repl` and specify the location of the `hpc-ssh.json` file.
This may not be the most efficient way to do that but with somewhat sensible automation, it should not be too tedious.
If you want to do a proxyjump through ssh you may have to tweak some port forwarding to make it work right. I would say first establish the connection without the proxy jump and then add this layer on top of that.
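If the console client struggles to tunnel through the jump host, one fallback, sketched here, is to declare the forwards explicitly in the ssh config, since `ProxyJump` is applied before any `LocalForward`. The port numbers must match the ones in your `hpc.json`; the values below are made up:

```
Host server
    User user
    HostName server.example.com
    ProxyJump entry
    # One LocalForward per port listed in hpc.json (illustrative values):
    LocalForward 54001 127.0.0.1:54001
    LocalForward 54002 127.0.0.1:54002
```

With that in place, a plain `ssh -N server` holds all the tunnels open through the jump host.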
Oh that's great! Works also at my end. Does require a good minute of manual work, but I'll see if I can automate this when I have some more bandwidth and will post results here.
Yes, it takes a good minute of work, but it is well worth it if the state of your session takes longer to achieve. I use org-mode src blocks, so with the correct header I do not need to perform step 6:
```org
* Test
:PROPERTIES:
:header-args:python: :async yes :session <location of your hpc-ssh.json> :noweb yes :pandoc t :kernel python
:END:

#+begin_src python
## C-c C-c in here will auto-spawn the session
#+end_src
```
The documentation mentions that it is possible to connect to a remote kernel on a notebook server by specifying `/jpy::<kernel-id>`, but for me it was not so simple. When trying to connect to an existing kernel on a notebook server that way, emacs-jupyter checks `jupyter-server-kernel-names` for existing kernels it knows about. I've found this variable is filled by two actions, one of them being `jupyter-server-list-kernels`. The first one creates a new kernel, so that is not what I want, but the second one can be used to connect to an existing kernel:
1) Connect to the existing kernel with `jupyter-connect-server-repl`
2) Rename the kernel to some name `<kernel-name>` in `jupyter-server-list-kernels` (with `R`)
3) Use `:session /jpy::<kernel-name>` as the session parameter for your org session.
With these steps it was possible for me to connect to an existing kernel on a notebook server.
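For finding a `<kernel-id>` in the first place, the notebook server's REST endpoint `/api/kernels` lists every running kernel. A sketch of querying it, assuming token authentication; `base_url` and `token` are placeholders for your own setup:

```python
import json
from urllib.request import Request, urlopen

def kernel_ids(kernels_json):
    """Map an /api/kernels response (a list of kernel dicts) to
    (kernel-id, kernelspec-name) pairs."""
    return [(k["id"], k["name"]) for k in kernels_json]

def list_server_kernels(base_url, token):
    """Ask a running notebook server which kernels it hosts.
    base_url is e.g. "http://localhost:8888"; both it and token
    depend on how your server was started."""
    req = Request(f"{base_url}/api/kernels",
                  headers={"Authorization": f"token {token}"})
    with urlopen(req) as resp:
        return kernel_ids(json.load(resp))
```

The returned ids are what `/jpy::<kernel-id>` (or the rename-then-`<kernel-name>` workflow above) refers to.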
Hi,
sometimes when using the jupyter REPL I lose the connection to my remote server and am thus forced to restart the kernel. I was wondering whether it'd be possible to keep the REPL alive by running it behind tmux and reconnecting to it in case the network connection goes down. This would be amazing for preserving state.
Has anyone attempted to implement anything like that, or knows how one could attempt to do so?