Open sperezconesa opened 4 years ago
I've used SSH to spawn JupyterLab running JupyterLab-Slurm remotely (so the JupyterLab server wasn't on localhost, and localhost's connection to the JupyterLab server went over SSH). In that setup, the HTTP requests made by JupyterLab-Slurm to the backend server extension, which then ran the Slurm commands remotely, didn't themselves go over SSH, but it was still possible to use JupyterLab-Slurm "with SSH" in that sense.
Would that (remote Slurm + remote JupyterLab, both via SSH) be good enough for you?
You just need to use port forwarding when creating the SSH connection. That is, if the remote JupyterLab exposes port 8888, use SSH port forwarding to forward the remote port 8888 to the local port 8888. https://www.ssh.com/ssh/tunneling/example This was actually how I originally did testing and development work for this project.
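For example (here `user@cluster.example.com` is a placeholder for your actual cluster login, and 8888 is assumed to be the port JupyterLab listens on):

```shell
# Forward local port 8888 to port 8888 on the remote machine running JupyterLab.
ssh -L 8888:localhost:8888 user@cluster.example.com
# Then browse to http://localhost:8888 as if JupyterLab were running locally.
```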
That being said I imagine that remote Slurm + remote JupyterLab is probably not a good enough solution for you, or else you wouldn't have opened this issue for local JupyterLab + remote Slurm via SSH.
In any case, the feature request you mention also sounds very reasonable to me. The frontend lab extension doesn't issue or run any of the Slurm commands; all of that is done by the backend server extension, using asyncio.create_subprocess_exec:
https://github.com/NERSC/jupyterlab-slurm/blob/master/jupyterlab_slurm/slurm.py#L11
So I suppose that, in order to run these Slurm commands over SSH, the code for ShellExecutionHandler would have to be modified somehow to support sending commands over SSH.
To be honest, although I have a fairly strong idea of where changes would need to be made, I have little to no idea how they could be implemented. In particular: how to securely store the user's SSH credentials, and how to obtain them from the user. I suppose the credentials could be stored in an environment variable, but that probably wouldn't be very secure?
And then it would seem to be necessary to implement a UI feature to request the user's SSH credentials. Or do you think it would be enough to request the user to manually set the environment variables via the command line? (Even though having users use the command line to do anything seems to defeat the purpose of JupyterLab-Slurm.)
In any case, there does seem to be a Python library that implements SSH in an asyncio-compatible manner, https://asyncssh.readthedocs.io/en/latest/ , although I have no prior familiarity with it.
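As a dependency-free illustration of the general idea (asyncssh would likely be the cleaner route), the existing asyncio.create_subprocess_exec call could simply be wrapped so that commands are prefixed with `ssh` when a remote host is configured. Everything here is a hypothetical sketch, not code from the repository; it assumes key-based authentication (ssh-agent or ~/.ssh/config), sidestepping the credential-storage question entirely:

```python
import asyncio
import shlex
from typing import Optional


async def run_over_ssh(host: Optional[str], *cmd: str) -> str:
    """Run cmd locally, or over SSH when host is given (hypothetical helper).

    Assumes key-based auth is already set up, so no password handling here.
    """
    if host is not None:
        # Quote each argument so the remote shell sees the command intact.
        cmd = ("ssh", host, " ".join(shlex.quote(c) for c in cmd))
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode())
    return stdout.decode()


# With host=None the command runs locally; with host="login.cluster" the same
# call would instead execute: ssh login.cluster 'squeue -u someuser'
print(asyncio.run(run_over_ssh(None, "echo", "hello from slurm host")))
```

The point of routing everything through one helper is that ShellExecutionHandler could switch between local and SSH execution based on a single configuration value.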
Given all of my responsibilities for other projects, I honestly probably don't have the time to implement this, if only because my substantial ignorance about asynchronous SSH with Python means it would take me much longer than it would take someone already familiar with it.
If you know anything more about this, or anyone else who does, we would be more than happy to accept a PR implementing this! (I realize, though, that this likely isn't a very satisfying answer for you.)
Yes. The thing I am interested in is unfortunately the complicated feature. I would love to have the knowledge to help you... The only thing I can really do is thank you for the application, because it's great, and I look forward to this possible new feature.
Hi,
I am in the same situation, and would rather see the possibility of using the Slurm REST API (instead of SSH) to perform the actions needed by the plugin. In this use case, you would just have to provide a JWT token stored locally within your local files, and the jupyterlab-slurm plugin would use that to get/post/update its data. Would that be a better solution?
Hello, Is it possible to run JupyterLab locally but use, through SSH, the Slurm of a cluster? If not, this would be a wonderful feature. Best, Sergio