sagemathinc / cocalc

CoCalc: Collaborative Calculation in the Cloud
https://CoCalc.com

jupyter/side-chat: "run" should use the same kernel as the notebook #7555

Open haraldschilly opened 2 months ago

haraldschilly commented 2 months ago

It would be really great if code evaluation in the side chat of a Jupyter notebook used the same kernel as the main notebook. This could certainly lead to confusion, but more often I run into situations where the code in the side chat relates to the notebook, and it makes sense to have access to all already-defined variables, packages, and functions.

williamstein commented 1 week ago

This certainly goes against the initial design of the side chat. It also conflicts with what Blaec always says in his demos: that the side chat is a sort of test bed before you do things in the main notebook.

Your suggestion takes the already poor reproducibility of Jupyter notebooks to another level of non-reproducibility.

Initially the side chat was ephemeral and wasn't even tied to the current directory. Now the code does run in the current directory, so that's one step toward this. Some questions:

One thing that would obviously be reasonable, I guess, is to have a new kernel called "global" (?), so:

```global
print(a)
```

Then, no matter what, code eval, highlighting, etc., always use the same kernel as the main notebook, document, etc. The highlighter could take that into account, and it would be easy to select this "global" kernel in the dropdown. I have no clue what to do with LLM output, though, which is going to explicitly put things like this in:

```python
2+3
```

Would we parse those and change them to "global"?
Another bad thing about "global" is that if people use it in markdown cells, it is totally incompatible with normal Jupyter notebooks (the highlighting is all wrong).

Anyway, I don't know how to solve this problem. Maybe the most naive and simple thing is fine, which is obviously the following: if the language of the code block in the markdown cell matches the language of the Jupyter notebook, just use the Jupyter kernel; otherwise, don't.
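A minimal sketch of that naive routing rule (the function and parameter names here are hypothetical, not CoCalc's actual API):

```python
# Hypothetical sketch of the language-matching rule described above.
# None of these names come from the CoCalc codebase.
def pick_kernel(block_lang: str, notebook_lang: str, notebook_kernel):
    """Route a fenced code block: share the notebook's kernel only when
    the languages match; otherwise fall back to the existing stateless
    markdown evaluation (signaled here by returning None)."""
    if block_lang.lower() == notebook_lang.lower():
        return notebook_kernel  # e.g. the notebook's running python3 kernel
    return None                 # different language: use the stateless path
```

So a ```python block in the side chat of a Python notebook would share the notebook kernel, while an ```r block in the same notebook would keep the current stateless behavior.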

In terms of implementation, this is a radical departure from the entire design of the markdown code evaluation, to put it mildly. The markdown code evaluation (used in the side chat) is stateless (except for the filesystem), and evaluation happens for the entire thread from top to bottom *every time*. From the frontend's point of view there is no explicit interaction with a kernel at all. Instead, the entire thread's code is sent to an API, and the new output is returned (just like with LLMs). Moreover, there's a lot of caching that assumes this model.
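The stateless whole-thread semantics described above can be sketched roughly as follows. `evaluate_thread` is a made-up name, and plain `exec` stands in for the real API call, so this is only an illustration of the semantics, not the implementation:

```python
import contextlib
import io

def evaluate_thread(blocks):
    """Evaluate every code block in a side-chat thread, top to bottom,
    in a fresh namespace. No state survives between calls (the real
    system also shares the filesystem, which this sketch ignores)."""
    ns = {}
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        for code in blocks:
            exec(code, ns)
    return out.getvalue()

thread = ["a = 2", "print(a + 3)"]
first = evaluate_thread(thread)   # "5\n"
second = evaluate_thread(thread)  # identical: nothing persisted between runs
```

Every evaluation starts from scratch, which is exactly why a response to this issue can cache aggressively: the same thread always produces the same output.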

I'm adding the "unclear" tag, since this ticket is a huge unclear can of worms to me.
williamstein commented 1 week ago

My point is that making the change you suggest here is only reasonable if the entire execution model and design of code execution in markdown documents is completely changed. That could be a lot of work, but on the other hand, I never even documented what the model is, so it's possible to change it. It is almost reactive (like Pluto!) and nothing like the Jupyter execution model.
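A toy illustration of why whole-thread re-evaluation behaves almost reactively (hypothetical names; plain `exec` again stands in for the real API): editing an upstream block changes downstream results on the next run, with no stale kernel state to invalidate by hand.

```python
def run_thread(blocks):
    """Re-run the whole thread top to bottom in a fresh namespace."""
    ns = {}
    for code in blocks:
        exec(code, ns)
    return ns

thread = ["x = 1", "y = x + 1"]
y_before = run_thread(thread)["y"]  # 2
thread[0] = "x = 10"                # edit the upstream block
y_after = run_thread(thread)["y"]   # 11: downstream updates automatically,
                                    # unlike a stateful Jupyter kernel where
                                    # y would stay 2 until re-executed
```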