jupyter / enhancement-proposals

Enhancement proposals for the Jupyter Ecosystem
https://jupyter.org/enhancement-proposals
BSD 3-Clause "New" or "Revised" License

Markdown based notebooks #103

Open avli opened 1 year ago

avli commented 1 year ago

This PR is an outcome of the Jupyter Notebook workshop. The JEP proposes an alternative Markdown-based serialization syntax for Jupyter notebooks that allows lossless serialization from/to .ipynb, is reasonably human-readable, is interoperable with standard text tools, and is more VCS-friendly.

After discussing with @fcollonval during the workshop, we skipped the step of creating a GitHub issue to decide whether this belongs as a JEP in this repository.

Resolve #102

jgm commented 1 year ago

Note that the syntax

```{jupyter.code-cell}
```

is incompatible with pandoc's markdown. Ideally, it would be nice if the proposed format could be read and processed by pandoc (and thus doesn't require a custom parser).

Why not use an attribute that is compatible? E.g.

{.jupyter .code-cell}

or

{.jupyter-code-cell}

or even just

{.code-cell}

There is currently no official attribute syntax for commonmark, but if this comes it is likely to be very similar to the pandoc attribute syntax.

See https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/attributes.md

Similar remarks for other uses of {jupyter.XXX}.
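To make the interoperability point concrete, here is a rough sketch (in Python, purely illustrative — not pandoc's implementation) of how a pandoc-style attribute block in a fence info string decomposes into classes and key-value pairs; the attribute names used below are invented for the example:

```python
import re

# Hypothetical illustration (not pandoc itself): extract pandoc-style
# attributes like {.jupyter-code-cell key="value"} from a fence info string.
ATTR_RE = re.compile(r'^\{(?P<body>[^}]*)\}$')

def parse_attributes(info_string):
    """Return (classes, key_value_pairs) from a pandoc-style attribute block."""
    m = ATTR_RE.match(info_string.strip())
    if not m:
        return [], {}
    classes, kvs = [], {}
    for tok in re.findall(r'\.[\w-]+|[\w-]+="[^"]*"', m.group('body')):
        if tok.startswith('.'):
            classes.append(tok[1:])
        else:
            key, _, val = tok.partition('=')
            kvs[key] = val.strip('"')
    return classes, kvs

classes, kvs = parse_attributes('{.jupyter-code-cell execution_count="3"}')
print(classes, kvs)  # → ['jupyter-code-cell'] {'execution_count': '3'}
```

The point being: any tool that already understands pandoc-style attributes could pick out notebook blocks without a custom parser.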

stevejpurves commented 1 year ago

thanks for the comments @jgm

yes, the syntax

```{jupyter.code-cell}
```

is aimed at providing concrete "directives" in the document that can be used to specify the various notebook blocks, which go beyond code blocks and also specify output and attachments and other complex/rich types.

So the JEP isn't favoring any existing parser/library, and while it isn't currently compatible out of the box with pandoc, it's also not compatible out of the box with jupytext, myst or quarto -- although the syntax currently shares a lot with the quarto and myst styles.

A custom parser / serializer, or modifications to existing parsers, are probably going to be needed anyway in order to support the serialisation requirements around output and attachment blocks?

jgm commented 1 year ago

Yes, I understand the intent. But that intent can be met without departing from standard attribute syntax.

If you used one of the variants I suggested, or e.g. {.jupyter:code-cell}, which also works, then you'd be able to read one of these md notebooks with pandoc and process it with filters.

With your current syntax suggestion, that wouldn't be possible; you'd be giving up easy interoperability for no good reason that I can discern.

jgm commented 1 year ago

A custom parser / serializer, or modifications to existing parsers, are probably going to be needed anyway in order to support the serialisation requirements around output and attachment blocks?

This could all be handled with filters with the existing pandoc markdown or extended commonmark parser; none of it requires changes to the parser.

nthiery commented 1 year ago

Thanks @jgm for the feedback! The motivation for having jupyter somewhere in the attribute is namespacing. Other than this, we certainly should consider variants of the proposed syntax if this helps interoperability and increases the odds of being consistent with whatever standards may emerge in the Markdown world.

Using .jupyter.code instead of jupyter.code seems totally fine to me.

I am not sure about .jupyter .code: on the one hand, it's consistent with the .code keyword of pandoc. On the other hand, it conveys the idea of namespacing less clearly.

Presumably a good guideline to follow is what would be customary in the CSS world. I am far from an expert there!

jonsequitur commented 1 year ago

Using Markdown for notebooks that display nicely as READMEs (similar to https://github.com/mwouts/jupytext/issues/220) has been explored for Polyglot Notebooks / Try .NET. One detail from that design that might be of interest here is that we also put cell metadata after the code fence, but always prefixed with the language name in order to leverage existing syntax highlighting features.

Here's an example:

```python {metadata: ...}
x = 1
if x == 1:
    # indented four spaces
    print("x is 1.")
```

This renders with language-specific highlighting without displaying the metadata:

```python {metadata: ...}
x = 1
if x == 1:
    # indented four spaces
    print("x is 1.")
```

jgm commented 1 year ago

Using .jupyter.code instead of jupyter.code seems totally fine to me.

Some implementations may take .jupyter.code to be specifying two class names rather than one (and thus to be equivalent to .jupyter .code). And in general, even if implementations supported it, having . or : in class names is not ideal. (Colons need to be escaped in CSS, and periods conflict with the class syntax.)

.jupyter-code or .jupyter_code should be fine.

Another alternative would be to use a key-value pair: jupyter="code", jupyter="output", etc.
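For concreteness, the key-value alternative could be combined with a plain language class on the fence — a sketch only, none of these attribute names are settled:

````markdown
``` {.python jupyter="code-cell"}
x = 1
```

``` {.python jupyter="output"}
1
```
````

A pandoc reader would presumably parse these as ordinary attributed code blocks, so filters could pick them out by the jupyter attribute while syntax highlighters key off the .python class.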

nthiery commented 1 year ago

One detail from that design that might be of interest here is that we also put cell metadata after the code fence, but always prefixed with the language name in order to leverage existing syntax highlighting features.

Thanks for the feedback that brings perspective to one of the open points.

I personally lean toward making this a recommended feature: parsers should support it; writers (including humans!) are encouraged to use it, but don't have to depending on the use case.

stevejpurves commented 1 year ago

On the class attribute syntax: I don't like the idea of syntax that overloads the class attribute; {.code-cell} essentially equates to <div class="code-cell"></div>, whereas {code-cell} essentially equates to <code></code>, which is semantically stronger.

Speaking from a jupyter point of view, I think we want strong semantics around what a jupyter code-cell (or output, or attachment) is (with or without the {}), and what information should be on them in terms of parameters, attributes, metadata, etc. These are not <div>s of a certain class; they are semantically meaningful elements with a specific representation when serialized, and they are rendered as complex UI fragments in jupyter clients.

On interoperability: a block syntax of {code-cell} is already compatible with jupytext and MyST notebooks. With the introduction of new block types and the jupyter namespace ({jupyter.code-cell}), it is still well aligned with the block/directive syntax used by jupytext, myst and, I think, quarto -- extension should be straightforward there.

jgm commented 1 year ago

A block syntax of {code-cell} is already compatible with jupytext and MyST notebooks

My point is that you should care about wider interoperability.

I think quarto - extension should be straightforward there.

Quarto is based on pandoc (it uses pandoc's parsers with a bunch of filters on top to process the AST), so you need to be interoperable with pandoc for that.

stevejpurves commented 1 year ago

A block syntax of {code-cell} is already compatible with jupytext and MyST notebooks

My point is that you should care about wider interoperability.

I think we do? And I think we're considering and discussing that here -- I guess what I'm not clear on is, given that there are multiple possible (probably conflicting) tools to be interoperable with, how to weight them. E.g. I'm not clear on the extent to which pandoc is actively used alongside jupyter in the same way that jupytext is (i.e. in a tight loop of notebook development and execution), as opposed to, say, getting notebooks out to other formats for distribution of that material outside of jupyter.

Also, another big point on interoperability which hasn't been mentioned yet is GFM!

Maybe what we are missing in the JEP so far are some clearer requirement-like statements that can be discussed and agreed on.

Currently the "design goals" section is the closest to something like that, but it is still very loose: e.g. "The serialized notebook should be a valid Markdown file.", whatever that means. Firming this up could better set the scene for then zeroing in on the syntax.

Quarto is based on pandoc (it uses pandoc's parsers with a bunch of filters on top to process the AST), so you need to be interoperable with pandoc for that.

Ah ok, I thought it was pandoc flavored markdown + additional extensions -- are you saying that pandoc already supports the quarto code block syntax, which doesn't use class attributes and is close to the syntax already outlined in the JEP? or is this special handling of a language attribute by pandoc?

[screenshot: Quarto documentation example of an executable ```{python} code block with a #| echo: false option line]

jgm commented 1 year ago

I suspect that's a documentation bug. Pandoc allows

````
``` {.python}
````

or

````
``` python
````

I believe the same is true of Quarto, because they don't use a customized pandoc, just filters on top.

All I'm saying is that if there's any room for a choice between

```
{.jupyter-code}
{.jupyter:code}
{.jupyter.code}
{jupyter .code}
{jupyter-code}
```

etc., it would be desirable (in this planning stage) to pick one that pandoc can already handle. This increases interoperability at little cost. (This would have been a good design goal for MyST, too.)

krassowski commented 1 year ago

I would love to see a new section addressing the topic of trust and signatures (Jupyter Notebook security model). In particular: would signature for notebook be computed and stored in the markdown file?

Please also see https://github.com/jupyter/enhancement-proposals/issues/95#issuecomment-1501176251.

avli commented 1 year ago

@krassowski, thank you for raising this question!

As far as I understand from the documentation, the signature is produced from the outputs. Can we apply the same procedure to the outputs inside the Markdown file?

Most likely, I oversimplify things, and you probably see some rough edges. If so, could you share your thoughts?
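For context, nbformat's trust model signs notebook content with a per-user secret key via HMAC. A minimal sketch of applying the same idea to the bytes of a Markdown notebook (the function names and key handling here are invented for illustration, not Jupyter's actual implementation):

```python
import hmac
import hashlib

# Sketch only: HMAC-sign the serialized Markdown notebook, analogous in
# spirit to nbformat's signature scheme for .ipynb files. The secret key
# stays on the user's machine.
def sign_notebook(markdown_bytes: bytes, secret: bytes) -> str:
    return hmac.new(secret, markdown_bytes, hashlib.sha256).hexdigest()

def check_signature(markdown_bytes: bytes, secret: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(sign_notebook(markdown_bytes, secret), signature)

doc = b"```{jupyter.code-cell}\nprint('hi')\n```\n"
sig = sign_notebook(doc, secret=b"per-user-secret")
print(check_signature(doc, b"per-user-secret", sig))  # → True
```

The open question — whether the signature lives in a signature database (as nbformat does today) or inside the Markdown file itself — is orthogonal to how it is computed.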

echarles commented 1 year ago

Cell outputs and attachments are mentioned in several places, but it is not clear to me if there is an option to have a companion file to the markdown file to persist those cell outputs and attachments.

nthiery commented 1 year ago

Cell outputs and attachments are mentioned in several places, but it is not clear to me if there is an option to have a companion file to the markdown file to persist those cell outputs and attachments.

Thanks for your feedback. Externalising cell outputs and attachments (e.g. in companion files) is indeed a natural feature. During our discussions, various use cases and approaches emerged. For an incremental approach, and also because the feature could be relevant as well for traditional ipynb notebooks, we decided to propose to treat that feature in a followup JEP. See line 580 of:

https://github.com/jupyter/enhancement-proposals/pull/103/files#diff-932448845fb9d55aef27789043a371eb872aa644507bf72e049f5ab536428238R580

With the current JEP, cell outputs and attachments can be stored inline only, or not at all.

echarles commented 1 year ago

in a followup JEP.

Well, I would feel more comfortable if this important topic were handled in this JEP, to make sure all the bits make sense. It can make sense to discuss them in separate forums, but giving my +1 to a partial solution which excludes the difficult aspects is not appealing to me.

See line 580

oh yes, it was indeed excluded.


nthiery commented 1 year ago

Well, I would feel more comfortable if this important topic were handled in this JEP, to make sure all the bits make sense. It can make sense to discuss them in separate forums, but giving my +1 to a partial solution which excludes the difficult aspects is not appealing to me.

Thanks for giving us the opportunity to detail and clarify our reasoning.

In the use cases we had in mind, the feature did not look difficult, at least when it comes to the notebook format itself: one simple solution is to enable metadata for cell outputs and for attachments specifying that the data is not provided inline but is to be fetched from a given URL.
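To illustrate (the field names below are invented for the example, not part of the JEP or of nbformat), such an externalized output might look like:

```python
import json

# Purely illustrative: a hypothetical metadata shape for an output whose
# payload is not stored inline but fetched from a companion file or URL.
# "output_type", "data" and "metadata" follow nbformat's output structure;
# the "external" block is an invented extension.
output = {
    "output_type": "display_data",
    "data": {},  # empty: the payload lives in the companion file
    "metadata": {
        "external": {
            "url": "./notebook_files/cell-3-output-0.png",
            "mimetype": "image/png",
        }
    },
}
print(json.dumps(output, indent=2))
```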

The feature is relevant for both Markdown and ipynb notebooks, and the above implementation does not depend on the format.

Of course, that's not all there is to externalizing data -- like how you make sure, e.g., that companion files remain available or URLs remain valid when the notebook is moved around -- but these difficulties are about tools and workflows, not the file format of the notebook.

Does that sound adequate in the use cases you have in mind?

echarles commented 1 year ago

how you make sure, e.g., that companion files remain available or urls remain valid when the notebook is moved around -- but these difficulties are about tools and workflows, not the file format of the notebook.

Keeping the companion file with its host is one aspect which is indeed not directly relevant to the file format.

My attention point was more about the cell id. With ipynb, a cell has its id, input and output all together in a single JSON stanza, so it is easy to update them all at the same time. With a companion file, you completely lose that single structure, and something on top needs to keep things in sync. Think of cell deletion, insertion, splitting...: all of that will mutate the cell ids in ways that need to be reflected in the companion file. You will reply that this is also part of the tools and workflow, which I would agree with, but I don't see in the format definition the concept of cell id (or code block id), nor the requirements placed on tooling developers to ensure users are safe while editing the content. In other words, this JEP should establish that the proposed format will indeed be usable and able to support companion files in some way.
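The sync concern could be made concrete by requiring a stable id attribute on each block — purely illustrative, this attribute name is not part of the JEP:

````markdown
```{jupyter.code-cell id="1a2b3c"}
print("hello")
```
````

A companion file could then key outputs by that id, so edits that reorder cells don't break the association; deletion and splitting would still need tooling support to keep both files consistent.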

willingc commented 1 year ago

I have mixed feelings on the format proposed for a few reasons:

  1. The JEP should have a section on "How we communicate to the broader community" if the proposed changes are adopted. This is really important from a messaging standpoint for the role of the .ipynb format going forward.
  2. While the technical merits seem appealing, will this open the door for further fragmentation of the .ipynb standard for notebooks? While it may not be the most modern approach now, it does, much like PDF (not an ideal technology), serve as a standard for notebook sharing.
fcollonval commented 1 year ago

We have started looking at this at the SSC meetings. We have decided to give at least another 2 weeks of discussion before moving forward.

allefeld commented 11 months ago

I think having a markdown-based alternative format for Jupyter notebooks is a great idea.

But supporting and slightly expanding on the interoperability issues @jgm raised: just for simplicity's sake, I would also suggest using or adapting an existing format as far as possible, instead of introducing yet another variation.

Since a Quarto qmd file is already a functional alternative representation of a notebook (converted to ipynb for execution and back to md afterwards, including output cell contents), and it is already interoperable with Pandoc, why not build your solution on top of that?

In any case, I think it would be good to actively involve representatives of related projects in this process, e.g. Quarto's @cderv.

echarles commented 11 months ago

Since a Quarto qmd file is already a functional alternative representation of a notebook (converted to ipynb for execution and back to md afterwards, including output cell contents), and it is already interoperable with Pandoc, why not build your solution on top of that?

There has been mention of https://github.com/executablebooks/mystmd here, and I remember having seen public discussions between MyST and Quarto, if I am not mistaken. What about targeting interoperability between ipynb and myst, and then between myst and qmd?

Around ipynb interoperability, a general question is for me "How related/different would it be to https://github.com/mwouts/jupytext?"

cderv commented 11 months ago

are you saying that pandoc already supports the quarto code block syntax, which doesn't use class attributes and is close to the syntax already outlined in the JEP? or is this special handling of a language attribute by pandoc?

@stevejpurves @jgm Just chiming in to add some precision about this. The ```{python} syntax is used for executable code blocks, whose support is brought by Quarto. #| echo: false inside the block (as on the screenshot shared) is a syntax for options to use during execution. So it is Quarto-specific syntax, additional to Pandoc's code block syntax ```{.python} or ``` python, but compatible with the Markdown reader.

In Quarto, computations are handled before the Pandoc conversion through engines, among them the Jupyter engine. The computation stage produces an intermediary .md file with the source code blocks and their results in Pandoc's Markdown syntax, to be processed with Pandoc.
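As a rough sketch of the option-comment convention described above (not Quarto's actual parser), splitting #| header options from the code of a cell could look like:

```python
import re

# Illustrative sketch: Quarto-style "#| key: value" option comments appear
# as a contiguous header at the top of a cell; everything after the first
# non-option line is plain code.
def split_options(cell_source: str):
    options, code_lines = {}, []
    in_header = True
    for line in cell_source.splitlines():
        m = re.match(r'#\|\s*([\w-]+):\s*(.*)', line) if in_header else None
        if m:
            options[m.group(1)] = m.group(2).strip()
        else:
            in_header = False
            code_lines.append(line)
    return options, "\n".join(code_lines)

opts, code = split_options("#| echo: false\n#| label: fig-1\nplot(x)")
print(opts)  # → {'echo': 'false', 'label': 'fig-1'}
```

Because the options live in comments, the block stays valid Python for Pandoc's Markdown reader and for ordinary syntax highlighters.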

Hope it helps clarify. Happy to show more if needed.

cscheid commented 11 months ago

I suspect that's a documentation bug. Pandoc allows

````
``` {.python}
````

or

````
``` python
````

I believe the same is true of Quarto, because they don't use a customized pandoc, just filters on top.

Just to clarify a little bit more on the Quarto side: we switched to a custom Reader as of (I believe) Pandoc 3, so we're no longer strictly "just filters on top". We did this so that we wouldn't break backwards compatibility for the very common syntax

````
```{python}
code block
```
````


As @jgm pointed out, that is indeed not valid syntax for codeblock nodes in pure pandoc:

````
$ pandoc -f markdown -t native
```{python}
print("hello")
```
^D
[ Para [ Code ( "" , [] , [] ) "{python} print(\"hello\")" ] ]
````


But in quarto, you get this instead:

````
$ cat codeblock.qmd
---
engine: markdown # to avoid the execution of the code
---

```{python}
print("hello")
```

$ quarto render codeblock.qmd -t native -o -
pandoc -o /var/folders/nm/m64n9_z9307305n0xtzpp54m0000gn/T/quarto-sessionc91f1714/99369018/548c0fe7.native
  to: native
  standalone: true
  default-image-extension: png

Pandoc Meta { unMeta = fromList [] } [ CodeBlock ( "" , [ "{python}" ] , [] ) "print(\"hello\")" ]
````


If we request markdown output we don't get _precisely_ the same codeblock, but it's close enough that it roundtrips correctly:

````
$ quarto render codeblock.qmd -t markdown -o -
pandoc -o /var/folders/nm/m64n9_z9307305n0xtzpp54m0000gn/T/quarto-sessiona858c56a/94c20cae/e83363f1.md
  to: markdown
  standalone: true
  default-image-extension: png

---
toc-title: Table of contents
---

```{python}
print("hello")
```
````

minrk commented 11 months ago

I do in general think it would be better for everyone if we were to officially adopt (and potentially extend) an existing format, since there are at least three of these now, rather than define another new format for more text-friendly notebook serialization. I think a pretty strong case has to be made that none of these formats can be built on successfully before defining a new format, and I don't feel that case has been made. I'd start from: what do myst/quarto/jupytext not do that we need, and how can we fill those gaps (if any) by building on those tools (or not)?

allefeld commented 11 months ago

Sorry, I claimed that qmd is Pandoc-interoperable, which it is not exactly, the exception being executable code blocks.

I'm not involved in Quarto development, but I have taken part in discussions on Quarto, and from that I know that there are mid-term plans to implement the initial extraction of code also via Pandoc, which needs a custom reader. @cscheid, I'm not sure whether that custom reader would be identical to the one you mentioned as already being used now? Would that mean that through that custom reader Pandoc would take over the complete work of the initial qmd → ipynb conversion, before calling NBClient? If yes, that might be a good starting point for something like qmd to take over the role of ipynb, i.e. clients supporting the new notebook format could use the same custom reader.

cscheid commented 11 months ago

I'm not involved in Quarto development, but I have taken part in discussions on Quarto, and from that I know that there are mid-term plans to implement the initial extraction of code also via Pandoc, which needs a custom reader.

I'm sorry - I'm not sure what you're referring to here.

westurner commented 11 months ago

A combination of MyST-Markdown (Jupyter-book (Sphinx)) and QMD (Quarto, nbdev) would be a great thing.

jupyter/nbformat does not and should not specify docutils or pandoc.

Additional criteria:

JupyterLab extensions:

Challenges / Opportunities:

allefeld commented 11 months ago

@cscheid:

I'm not involved in Quarto development, but I have taken part in discussions on Quarto, and from that I know that there are mid-term plans to implement the initial extraction of code also via Pandoc, which needs a custom reader.

I'm sorry - I'm not sure what you're referring to here.

I mean the discussion in https://github.com/quarto-dev/quarto-cli/discussions/3330: "first pass of pandoc with a custom writer" https://github.com/quarto-dev/quarto-cli/discussions/3330#discussioncomment-6816143, "we need to add an additional Pandoc pass that happens before engines" https://github.com/quarto-dev/quarto-cli/discussions/3330#discussioncomment-6823919, use "initial Pandoc pass … not just for preprocessing, but to create the ipynb" https://github.com/quarto-dev/quarto-cli/discussions/3330#discussioncomment-6824428 by me, but supported by the following comment, "Definitely agree that we need a parser." https://github.com/quarto-dev/quarto-cli/discussions/3330#discussioncomment-6826869.

cscheid commented 11 months ago

I apologize for further polluting this thread here, but I want to clarify a few points before further confusion sets in.

"first pass of pandoc with a custom writer" https://github.com/quarto-dev/quarto-cli/discussions/3330#discussioncomment-6816143,

Just to clarify for everyone: the user baptiste is not a quarto developer, and neither is allefeld, for other readers in here. Baptiste is offering a suggestion, and not one we're currently planning on implementing. My full reply was:

we already know we need to add an additional Pandoc pass that happens before engines (so that filters can add code cells that will be executed, and remove code cells as well)

My "remove code cells" comment is not about "extracting code cells" or the Pandoc syntax for code blocks. It is about the ability to identify executable code blocks for processing ahead of the execution engine.

cderv later says:

Definitely agree that we need a parser

In here, the context is that knitr eventually needs a parser in order to be able to detect and handle nested code cells, ultimately reducing the need for hacks like the multiple curly bracket treatment of code cells inside comments.

I appreciate the enthusiasm and energy to participate, but I'd just like to ask folks to try and refrain from stating or implying positions from quarto devs about the quarto project when they lack the appropriate context. If you need more clarification about the goals of the quarto project, please ask us quarto devs directly: that's me, cderv, dragonstyle, jjallaire, and rich-iannone. Thank you!

nthiery commented 11 months ago

Dear all, I am so glad to see active discussions on this JEP! Thanks everyone for contributing. I am under the water for a couple more days, but will provide feedback soon.

krassowski commented 11 months ago

From the standpoint of jupyter-lsp (which does not have an SSC representation), a format which enables encoding:

would be amazing to enable https://github.com/jupyter-lsp/jupyterlab-lsp/issues/467, quoting:

The Julynter experiment #378 demonstrated how notebook-specific IDE features could work. The ideas would include:

  • "empty cell" diagnostic
  • "cells execution not in order" diagnostic (obviously optional)
  • "cell with comments only could be a markdown cell" diagnostic
  • "remove empty cells" action
  • and many others (e.g. "ratio of markdown to code cells")

In order to make the language servers (optionally) cell-aware I propose we embrace the jupytext percent format:

```
# %% Optional title [cell type] key="value"
```

e.g. # %% for code cells and # %% [markdown] for markdown cells. We could store the cell execution number in the metadata (the key="value" part). We would allow the user to disable this feature.
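To make the percent-format idea concrete, here is a rough sketch (assumptions only, not jupytext's implementation) of parsing such a marker line into its title, cell type, and metadata parts:

```python
import re

# Illustrative parser for markers of the form:
#   # %% Optional title [cell type] key="value"
# Cell type defaults to "code" when the bracketed part is absent.
MARKER = re.compile(
    r'^# %%\s*(?P<title>[^\[]*?)\s*'
    r'(?:\[(?P<cell_type>\w+)\])?\s*'
    r'(?P<meta>(?:[\w-]+="[^"]*"\s*)*)$'
)

def parse_marker(line):
    m = MARKER.match(line)
    if not m:
        return None
    meta = dict(re.findall(r'([\w-]+)="([^"]*)"', m.group('meta')))
    return {
        "title": m.group('title') or None,
        "cell_type": m.group('cell_type') or "code",
        "metadata": meta,
    }

print(parse_marker('# %% [markdown] execution_count="4"'))
# → {'title': None, 'cell_type': 'markdown', 'metadata': {'execution_count': '4'}}
```

A round trip from ipynb would emit one such marker per cell and parse them back when reading, with the metadata pairs carrying things like the execution count.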

Now, I am not advocating for any specific format, but it would be amazing if a future "go-to" Markdown format supported this kind of metadata in some way.

Note: for the most part such metadata should not be presented to the user, but it would still be valuable to have a way to achieve a full round trip from ipynb to markdown and back.