davidbrochart opened 2 years ago
I think this is orthogonal to the discussion in #797. The idea here is to be able to communicate with the kernel while the shell channel is busy, without sending a message on the control channel.
A possible implementation is to have the kernel open a new channel, similar to `shell` (thus opening a new ZMQ socket), running in its own thread. This could be done via a new message sent on the control channel (`open_new_shell`); the reply would contain the channel information (essentially a name and a port). On the front-end side, a new client (such as a console) would be opened, with a visual indication that it is communicating via a different shell channel. When the client is closed, the kernel closes the channel.
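For concreteness, here is a guess at what these messages could look like; the `open_new_shell_request`/`open_new_shell_reply` names and the fields in the content are purely illustrative, not part of the current protocol:

```python
# Hypothetical shape of the proposed control-channel messages
# (illustrative only, not part of the current kernel protocol).
open_new_shell_request = {
    "msg_type": "open_new_shell_request",
    "content": {},
}
open_new_shell_reply = {
    "msg_type": "open_new_shell_reply",
    "content": {
        "name": "shell-1",  # name of the new shell channel
        "port": 56789,      # port of the new ZMQ socket
    },
}
```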
However, this raises some questions, discussed below.
An alternative implementation would be to use the same SHELL socket but have different UUIDs for the shell the client wants to target. The advantage of this solution is that it is transparent for the server, but it increases the complexity of the kernel, since the kernel has to handle the routing. Besides, shell execution can no longer happen on the main thread (which must remain available to poll messages on the socket and route them to the appropriate "shell" thread), which could break the debugger (this has to be investigated).
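A rough sketch of what this in-kernel routing could look like, assuming a dedicated router thread and a hypothetical `shell_id` field carried in the message header (this is not existing ipykernel code):

```python
# Sketch of single-socket routing (assumed design): one thread polls the
# SHELL socket and pushes each message onto the queue of the targeted
# shell, identified by a hypothetical "shell_id" field in the header.
import json
import queue
import threading

import zmq

shells: dict[str, queue.Queue] = {}  # shell UUID -> per-shell message queue

def route_shell_messages(socket: zmq.Socket) -> None:
    while True:
        frames = socket.recv_multipart()
        # Jupyter wire format: identities, b"<IDS|MSG>", signature,
        # header, parent_header, metadata, content.
        delim = frames.index(b"<IDS|MSG>")
        header = json.loads(frames[delim + 2])
        shell_id = header.get("shell_id", "main")
        shells.setdefault(shell_id, queue.Queue()).put(frames)

context = zmq.Context()
shell_socket = context.socket(zmq.ROUTER)
port = shell_socket.bind_to_random_port("tcp://127.0.0.1")
threading.Thread(
    target=route_shell_messages, args=(shell_socket,), daemon=True
).start()
```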
In both cases, this requires changing the kernel protocol, which would deserve a detailed JEP that might be premature at this point.
Thanks for the feedback @JohanMabille, this is interesting.
> When the client is closed, the kernel closes the channel.

I guess we would need a new message on the control channel to close the channel, such as `shell_close_request`, with the name of the shell channel in the content.
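Again purely as an illustration, such a message could mirror the `open_new_shell` reply:

```python
# Hypothetical close message, mirroring open_new_shell (illustrative only):
shell_close_request = {
    "msg_type": "shell_close_request",
    "content": {
        "name": "shell-1",  # the shell channel to close
    },
}
```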
> How would the mapping "channel / new ZMQ socket" be set in the server? What would trigger that?
I think that once the front-end receives the reply to the `open_new_shell` request, with the name and the port of the channel, it should make a request to the server at a new endpoint, such as `POST /api/shells`, with the channel name and port in the body. The server can then create the ZMQ socket and route messages between this socket and the front-end.
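Such a front-end-to-server round-trip might look like the following; the `/api/shells` endpoint does not exist yet, and the token handling is simplified:

```python
# Hypothetical request to the proposed (not yet existing) endpoint.
import requests

resp = requests.post(
    "http://localhost:8888/api/shells",
    json={"name": "shell-1", "port": 56789},  # from the open_new_shell reply
    headers={"Authorization": "token <your-jupyter-token>"},
)
resp.raise_for_status()
```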
> An alternative implementation would be to use the same SHELL socket but have different UUIDs for the shell the client wants to target.
That was my initial idea, but I think we should start with your idea of having a new shell socket, which seems simpler. I will open PRs in ipykernel, jupyter-client and jupyter-server, and start experimenting.
Discussing #797 with @SylvainCorlay, we thought that it might not tackle the issue at the right level. Instead of enabling background execution on a per-message basis, maybe we should execute a new kernel entirely in a new thread. Something like "kernel forking", although that might not be the right word, since it suggests another process ("sub-kernel" might be a better term). I think we need the following:
- A `subkernel_request` on the `control` channel. If the control channel runs in its own thread (as is the case in ipykernel), then a sub-kernel can be launched even when the main kernel is busy running a cell. The sub-kernel can then be used to spy on the main thread, for instance to inspect a result while it is being computed.
- The `subkernel_reply` contains a `session` UUID that the client must use in further requests in order to target this sub-kernel. A new client can now be created for the same kernel, re-using the same ZMQ channels. It is currently already possible to use a kernel from an existing session in JupyterLab, which is very similar, except that we would need to use the given session UUID to target our sub-kernel.
- The `shell` and `stdin` ZMQ channels in the kernel should be read in another thread and dispatched into per-kernel queues identified by their session UUID, which each sub-kernel would consume. This will allow a sub-kernel to process messages while the main kernel is busy (see the sketch after this list).

I'm not sure this architecture would work, maybe I'm missing something. I would love to have feedback from @SylvainCorlay, @JohanMabille, @minrk, @jasongrout and others.
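To make the third point concrete, here is a minimal sketch of the per-session dispatch, under the assumptions above (all names, such as `handle_subkernel_request`, are hypothetical):

```python
# Minimal sketch of the sub-kernel architecture (assumed design): the
# control thread creates a queue and a thread per sub-kernel; a reader
# thread dispatches shell/stdin messages by their session UUID.
import queue
import threading
import uuid

main_queue: queue.Queue = queue.Queue()  # messages for the main kernel
subkernels: dict[str, queue.Queue] = {}  # session UUID -> sub-kernel queue

def handle_subkernel_request() -> dict:
    # Runs on the control thread, so it works even while the main
    # kernel is busy executing a cell.
    session = str(uuid.uuid4())
    q = subkernels[session] = queue.Queue()
    threading.Thread(target=subkernel_loop, args=(q,), daemon=True).start()
    return {"session": session}  # content of the hypothetical subkernel_reply

def subkernel_loop(q: queue.Queue) -> None:
    while True:
        msg = q.get()
        # ... handle execute_request etc. in this sub-kernel's thread ...

def dispatch(msg: dict) -> None:
    # Called by the thread reading the shell/stdin sockets: a message
    # whose session UUID matches a sub-kernel goes to its queue,
    # everything else goes to the main kernel.
    session = msg["header"]["session"]
    subkernels.get(session, main_queue).put(msg)
```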