Open digitalsignalperson opened 1 year ago
I'm not sure what the original intention was for `jupyter kernel`, but it's behaving exactly as it's coded. :smile:

It only terminates upon reception of either `SIGTERM` or `SIGINT`, after it shuts down the actual kernel process. To accommodate detection of the actual kernel process shutting down via an external process (e.g., `jupyter console` or the python script you list above), the `jupyter kernel` application would need to monitor the actual kernel process using `KernelManager.poll()`, and I'm not sure that's what we'd want. For example, if monitoring were added and the external process wanted to restart the kernel started by `jupyter kernel`, then `jupyter kernel`'s monitor would detect that the actual kernel process had exited and terminate, yet the second half of the restart would result in another "actual kernel process".

If you're worried about ZMQ ports getting leaked, the actual kernel process is shut down. Only its launching application (i.e., `jupyter kernel`) remains running until it's terminated via either of the two signals.

((Frankly, I don't know what purpose `jupyter kernel` serves. I suppose it's meant to allow other applications to ONLY submit messages, because it only manages lifecycle via `SIGTERM` or `SIGINT`.))
Hmm I see. That's what I ended up settling on: storing a `.pid` file alongside the `.json` connection file, and killing the process later when done with it (although with that I didn't bother doing a `km.shutdown()`, which probably isn't graceful).

The way I'm using `jupyter kernel` is to manage a persistent python kernel in my terminal of choice, send commands or stdin to it from my terminal prompt, and print the stdout/stderr back into the terminal. https://github.com/digitalsignalperson/comma-python/blob/main/%2Cpython And I have a method to either kill or restart the kernel when needed, which is why I wondered if shutting it down was supposed to terminate the process or not.

If this is an ok place to ask, I also ran into an issue where, sometimes after creating a new kernel, calling `km.execute(to_execute)` immediately never results in `km.iopub_channel.msg_ready()` returning True. I'm not sure what the correct way to deal with that is, or if there's some method to check and wait for before trying to execute something. Notes here: ,python#L70
> and killing the process later when done with it (although with that I didn't bother doing a `km.shutdown()` which probably isn't graceful).

If you "kill" the process using `SIGTERM` (i.e., `kill pid`) and not `SIGKILL` (i.e., `kill -9 pid`), then the signal handler should shut down the kernel - all good.
> If this is an ok place to ask, I also ran into sometimes after creating a new kernel, calling `km.execute(to_execute)` immediately never results in `km.iopub_channel.msg_ready()` returning True. I'm not sure what the correct way to deal with that is, if there's some method to check and wait for before trying to execute something.

I believe the best way to ensure a kernel is ready to receive execution requests is to complete a `kernel_info_request` (and `kernel_info_reply`) sequence - at least this is what the server does. Since this may lead to other questions (and my kernel protocol knowledge is limited), I'm going to preemptively add @JohanMabille as he's got this stuff down.
If you "kill" the processs using
SIGTERM
(i.e.,kill pid
) and notSIGKILL
(i.e.,kill -9 pid
) then the signal handler should shutdown the kernel - all good.
thanks, I've switched to SIGTERM
> I believe the best way to ensure a kernel is ready to receive execution requests is to complete a `kernel_info_request` (and `kernel_info_reply`) sequence - at least this is what the server does.

If I remove my 1 second sleep before I try to execute something and add the kernel_info request, it now similarly does not get the response from the `kernel_info_request` sometimes.
The code is doing more or less this:

```python
cf = jupyter_client.find_connection_file(connection_file_path)
km = jupyter_client.BlockingKernelClient(connection_file=cf)
km.load_connection_file()
km.kernel_info()
while True:
    if km.iopub_channel.msg_ready():
        # Sometimes msg_ready() never returns True, other times I do see the kernel_info_request
        ...
```
e.g. sometimes this works:

```
,python --new "import numpy as np"
Killed kernel with pid 376602
Started kernel with pid 401469
```

sometimes it doesn't:

```
,python --new "import numpy as np"
Killed kernel with pid 401469
Started kernel with pid 401919
No messages from kernel
Couldn't get kernel info
```
I see that your repo references `jupyter_client == 8.0.3`. You might also see if `jupyter_client < 8` behaves differently.
The SUB socket of the client can take time to connect to the IOPUB channel, and the client can miss important messages (especially those with the kernel status). The current workaround implemented in different clients is to "nudge" the kernel, i.e. send requests until the SUB socket is connected and able to receive the "idle" status message (i.e. having `km.iopub_channel.msg_ready()` returning `True`). You can find more detail in this issue.

The next version of the protocol will fix this issue "by design", using a socket that broadcasts a message when it receives a new connection. Clients can wait for this message on iopub (which is guaranteed to be delivered by ZMQ) before sending requests to the kernel.

You can find the detail of this JEP here. The JEP has been accepted, but not implemented yet.
If I do … and then e.g. … the original `jupyter kernel` process doesn't exit. Same thing if I shut it down like this: …

my setup on arch linux with: …