Closed by haraldschilly 3 months ago
As is, this issue doesn't really make sense, because if you kill the kernel process, the local_hub will just immediately start another one. We would really need the local http server (or something?) to allow you to "close and halt" a notebook.
> As is, this issue doesn't really make sense, because if you kill the kernel process, the local_hub will just immediately start another one. We would really need the local http server (or something?) to allow you to "close and halt" a notebook.
I did some experiments. After stopping the process, a new one didn't start until I executed code in the notebook. If I just leave the notebook open, or edit without executing, no new kernel is started. I think this is reasonable?
> After stopping the process, a new process didn't start until I executed code in the notebook.
Cool -- I didn't realize that! Yes, that seems like reasonable behavior.
William
As an additional question, I noticed that there are also other modules besides Jupyter, such as the terminal, markdown, and LaTeX editors. Do they work the same way as Jupyter? I mean, if the issue were solved, could we track the pid corresponding to a file no matter whether it is .ipynb, .term, or .md?
> Are they working in the same way as jupyter?
No.
> I mean, if the issue was solved, can we track the pid corresponding to a file no matter whether it is .ipynb, .term, or .md?
There's no separate process for most file types (e.g., markdown, LaTeX, etc.). The in-memory state used by these files is cleaned up after all users close the file. There's no automatic idle timeout for them.
There are pids for each terminal (and for each frame, if you split a terminal). Killing a terminal's pid will definitely spawn another process after a short pause. There's no specific idle timeout for these -- they just stay running as long as the project runs.
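As an aside, on Linux one rough way to hunt from the command line for the process backing a given file is to scan `/proc` for command lines that mention it. This is purely a heuristic (a Jupyter kernel's command line usually references a connection file rather than the notebook path, and nothing below is a CoCalc API) -- a minimal sketch:

```python
import os


def pids_with_cmdline_containing(fragment: str) -> list[int]:
    """Scan /proc (Linux-only) for processes whose command line mentions
    `fragment`. Result is a hint, not an authoritative file-to-pid mapping."""
    matches = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                # cmdline is NUL-separated; join the argv pieces with spaces
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
        except OSError:
            continue  # process exited, or permission denied
        if fragment in cmdline:
            matches.append(int(entry))
    return matches
```

For example, `pids_with_cmdline_containing("ipykernel")` would list candidate kernel processes, which you could then cross-check by hand against open notebooks.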
Closing with a similar remark to https://github.com/sagemathinc/cocalc/issues/3884
This would be an interesting idea to make part of the API. There's not likely to be much interest, though, since in the 5 years since these tickets were made there's been no activity. But we'll see.
From within the command line, it is hard to know which Jupyter notebook instance corresponds to which file. The goal of this ticket is to make this easier, and my idea is to add an endpoint to the project server that returns a JSON data structure exposing the internal state of the project hub related to these notebooks. Then someone can use that (via `jq` or whatever) to clean up old notebook instances, etc.

Req: ZD 6680
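To make the idea concrete, here is a sketch of what a consumer of such an endpoint might look like. The response shape and every field name (`kernels`, `path`, `pid`, `last_activity`) are illustrative assumptions, not an existing CoCalc API:

```python
import json

# Hypothetical response from the proposed project-server endpoint.
# In practice this would come from an HTTP GET, e.g. via urllib or curl.
sample = json.loads("""
{
  "kernels": [
    {"path": "notebooks/analysis.ipynb", "pid": 4321,
     "last_activity": "2019-01-01T12:00:00Z"},
    {"path": "scratch/test.ipynb", "pid": 5678,
     "last_activity": "2019-06-01T09:30:00Z"}
  ]
}
""")

# Map each notebook file to the pid of its kernel process, so a cleanup
# script can decide which kernels are stale and kill them selectively.
pid_by_path = {k["path"]: k["pid"] for k in sample["kernels"]}
```

The same extraction could be done in the shell with `jq '.kernels[] | {path, pid}'`, which is the kind of one-liner the ticket has in mind.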