paddyroddy opened 1 year ago
https://github.com/pydantic/pytest-examples does something similar.
I'm interested in trying to implement this. Looking at the projects listed above, it seems like code blocks are determined via Regex. Do we want to copy that approach or use a more robust Markdown parser (pulldown-cmark seems like a popular one)?
I would vote to use a markdown parser for robustness as you suggested.
Using a markdown parser makes sense from my view. It may also be worth considering tree-sitter because it supports many languages.
The part that's unclear to me is how, in the short term, we want to solve the mapping of column and line numbers in the `Message` emitters. Ruff supports Jupyter notebooks today, but the line mapping is somewhat "hacked-in" to make it work, and I'm a bit reluctant to add more exceptions to the line-number mapping.
I assume a similar mapping will be necessary for Markdown files because we only pass the code block's source to Ruff, but we should show users the absolute line number from the start of the Markdown document.
Long-term, we'll need to support the following two features.
Ruff should support linting different file types. For example, Ruff should be able to lint SQL, Python, and Markdown files. The file type decides which specific linter Ruff uses to lint the file. This includes keeping the LSP in sync with the file types supported by the CLI, and with which extensions map to which languages. Rome supports this today.
Embedded Languages

The idea is that a document written in one language can contain code written in another language. Examples are:

- Code blocks in markdown documents
- SQL in Python code
- Python code in Jupyter Notebooks
The markdown file handler would recognize code blocks and delegate to Ruff to decide how to parse, lint, and format the code block's content. Ruff's infrastructure would correctly map the line-numbers between the "virtual" documents and the "physical" documents on disk.
Yup, agree with all of your points @MichaReiser. As part of the design phase I've been thinking about how to generalize to different filetypes. It's an interesting problem - I will take a look at Treesitter parsers as well. Going down that route, we could support different languages relatively easily. I am going to submit PRs for this incrementally so that we can iterate over the design. The first one will be a parser that pulls code blocks out of Markdown - I'm leaning towards using Treesitter since that architecture could make it easier to provide "plugins" for other languages in the future.
As far as the line mappings, I was planning on making a generic `LineMap` struct (or something along those lines) which would contain line numbers mapped to their offset in the containing document. That struct could then be reused for embedded languages, like you mentioned, mapping relative line numbers in the chunk of interest to the absolute line number of the document. It should provide a flexible way to treat code blocks differently, regardless of whether it's an embedded language, a markdown code block, a Jupyter cell, etc.
I'll have to look into Messages and how they work before finalizing the code though. I've been reading through the Jupyter implementation as part of this.
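Roughly, a sketch of the `LineMap` idea could look like this (the struct and field names here are hypothetical, not Ruff's actual API):

```rust
/// Illustrative sketch of the `LineMap` idea described above; the name
/// and fields are assumptions, not Ruff's actual API.
struct LineMap {
    /// Byte offset in the containing document where each line of the
    /// embedded snippet starts (index 0 = the snippet's first line).
    line_starts: Vec<usize>,
}

impl LineMap {
    /// Map a line number relative to the snippet (0-indexed) to the
    /// absolute byte offset of that line in the containing document.
    fn absolute_offset(&self, relative_line: usize) -> Option<usize> {
        self.line_starts.get(relative_line).copied()
    }
}
```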
> As far as the line mappings, I was planning on making a generic `LineMap` struct (or something along those lines) which would contain line numbers mapped to their offset in the containing document.
Mapping line numbers should suffice for markdown documents but wouldn't be sufficient for e.g. SQL in python. I think it may actually be sufficient to simply add the byte offset of the markdown block to the range of every message/diagnostic.
Yup, that's basically what I'm doing - adding a `document_offset` field to `Message`s and `Diagnostic`s.

As far as the SQL in Python example, I was thinking the approach could work because the `document_offset` would be where the SQL code starts relative to the Python file. Line 1 of the SQL code could be on line 50 of the overarching .py file, so the `document_offset` for all diagnostics/messages in that block would be 50. Do you think there are flaws in that approach?

E: sorry, not mapping line numbers. Mapping byte offsets between SQL/Python lines. I still need to figure out a concrete design though, if you couldn't tell 😅
Going with a treesitter integration requires installing the treesitter grammar for the language in question. While we could gate this behind a workspace feature flag, I'd like some other folks' thoughts on it. I don't want to needlessly increase the binary size. I need to figure out how it all works to give an exact estimate of the increase, though.
The advantage of `pulldown-cmark`, as mentioned above, is that it's quite fast and won't affect the binary size. The disadvantage is that it obviously can't be used for anything except Markdown.
@MichaReiser @JonathanPlasse do you guys have any thoughts?
It seems easier to handle only Markdown first, and expand with the tree-sitter grammar later.
> Yup, that's basically what I'm doing - adding a `document_offset` field to Messages and Diagnostics.
Which use cases require a diagnostic or message to know its document offset? Would it be sufficient to mutate the `diagnostic.range` or `message.range` directly by adding the offset? I'm asking because increasing the size of `Message` and `Diagnostic` decreases performance.
> Advantage of pulldown-cmark as mentioned above is it's quite fast and won't affect the binary size.
My expectation is that pulling in `pulldown-cmark` increases the binary size because we include a new crate in our binary (that is 116 kB in size), or is pulldown-cmark already a (runtime) dependency?
> Going with a treesitter integration requires installing the treesitter grammar for the language in question.
Yeah, using treesitter is more complicated because it requires a custom build step. I'm not too opinionated on whether we should use treesitter or not. I think we can start prototyping with either and defer the decision to later.
> Would it be sufficient to mutate the diagnostic.range or message.range directly by adding the offset?
Yep! I was working on it yesterday and that's also the route I've chosen to go down. Sorry for the confusion, I should've waited until I had a fully fleshed out design before weighing in.
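For illustration, rebasing the ranges could look roughly like this, with stand-in types (Ruff's real `Diagnostic` and range types differ):

```rust
/// Stand-in types for illustration; Ruff's actual `Diagnostic` and
/// `TextRange` are different.
#[derive(Debug, Clone, Copy)]
struct TextRange {
    start: usize,
    end: usize,
}

struct Diagnostic {
    range: TextRange,
    message: String,
}

/// Rebase diagnostics produced for an embedded code block onto the
/// containing document by adding the block's byte offset to each range.
fn rebase_diagnostics(diagnostics: &mut [Diagnostic], block_offset: usize) {
    for diagnostic in diagnostics {
        diagnostic.range.start += block_offset;
        diagnostic.range.end += block_offset;
    }
}
```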
> My expectation is that pulling in pulldown-cmark increases the binary size because we include a new crate in our binary (that is 116 kB in size), or is pulldown-cmark already a (runtime) dependency?
It's not a runtime dependency, so it'll increase the binary size, yes. I meant that it'd be less than the size delta from installing treesitter and associated grammar(s), which aren't totally necessary at this stage.
> I think we can start prototyping with either and defer the decision to later.
Cool!
Listing down some of the issues I see after hacking around locally:

- Currently, everything revolves around a `Path`. As such, we have to rewrite `lint_fix`, `lint_only`, etc. to accept some sort of `CodeBlock` struct. Additionally, we have to figure out how to aggregate the diagnostics and display them/their fixes. If we don't want to treat them as discrete units, the code will be much simpler (almost akin to the Jupyter implementation).
- The `CodeBlock` structure, at minimum, should contain the block's content and its offset from the top of the file (what I'm calling its "global offset"); see the sketch below. Each diagnostic created while linting the block should have the code block's global offset added to it, meaning that when we display the line to users, they see the actual Markdown line number.
- We'd likely have to extend `UniversalNewLines` to take Markdown line breaks as well, though again, I'm not sure how that works.
- There could be a trait abstracting over `Sources`. I keep going back and forth on what the trait should entail, or if there should even be one.

Pinging @MichaReiser @charliermarsh just in case you guys have time to weigh in. I think the bare minimum question we need to answer is if we want to treat the code blocks as discrete units - I think we should, but that will require quite a bit of work.
P.S. sorry for the poor issue hygiene with the multiple closed PRs. I've been hacking around and didn't realize they're all on here.
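For reference, a minimal sketch of the kind of `CodeBlock` struct floated in the list above (the fields are assumptions, not a settled design):

```rust
/// Minimal sketch of the `CodeBlock` struct described above; the fields
/// are assumptions for illustration, not a settled design.
struct CodeBlock {
    /// The source inside the fenced block, as handed to the linter.
    content: String,
    /// Byte offset of the block's start from the top of the Markdown
    /// file (the "global offset" added to every diagnostic it produces).
    global_offset: usize,
}
```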
That's helpful knowledge that you gained with your prototype. Thanks for working on it.
I want to throw in two more use cases:
The conclusion I'm coming to is that it's probably worth abstracting over the file types instead, where Ruff provides `lint`, `fix`, `format`, etc. functions per file type. I don't know what the exact signature would be. These file-type-specific functions can then pre-process the input and e.g. call into other linters. For example, the markdown linter can call out to the python linter if it finds a python code block, and then post-process the diagnostics to patch up the line numbers.
The way this would work for SQL is that the python linter calls the SQL linter if it finds a SQL expression when traversing the AST. This is inspired by Prettier's approach, where Prettier detects template literals that are tagged with `graphql` (this is probably not valid syntax, I don't remember all the details):
```js
let result = useQuery(graphql`
query myQuery() {
}
`);
```
Rome implements something very close to this design. Each language supports different capabilities (we may support linting SQL but not formatting). How these capabilities are implemented is transparent to the CLI. All the CLI cares about is that there's a `lint` function to call and it knows how to lint the file's content.
Rome's source shows both the definition of the JavaScript file type and the file-agnostic API.
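As a rough sketch, the per-file-type capability idea could be expressed as a trait along these lines (hypothetical names, not a real Ruff or Rome API):

```rust
/// Placeholder diagnostic type for illustration.
struct Diagnostic;

/// Hypothetical per-file-type capability trait, loosely inspired by the
/// Rome design described above; none of these names are real Ruff APIs.
trait FileHandler {
    /// Lint the file's content, delegating embedded languages (e.g. a
    /// Python block inside Markdown) to another handler and patching up
    /// the line numbers of the returned diagnostics.
    fn lint(&self, content: &str) -> Vec<Diagnostic>;

    /// Capabilities differ per language: we may support linting SQL but
    /// not formatting it, so formatting returns `None` when unsupported.
    fn format(&self, content: &str) -> Option<String>;
}
```

The CLI would then only iterate over handlers and call `lint`, staying agnostic of how each language implements it.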
@evanrittenhouse I was definitely considering them as discrete units when I raised this. For example here https://github.com/astro-informatics/sleplet/blob/c510795e7ebd91000b8afe53276e522b75cdfbb6/.github/workflows/examples.yml#L33 I use pytest-codeblocks to run some example code blocks (essentially acting as another type of test).
@MichaReiser I agree with the file-agnostic hot path that calls into adapters for different languages. I think the approach you propose sort of mimics LSPs in that each language can have different capabilities, implemented differently. I'd love to work on this, but I think that there will be a lot of design decisions that you guys probably want to weigh in on. Is it even worth continuing work on this before we can have those talks?
> I think the approach you propose sort of mimics LSPs in that each language can have different capabilities, implemented differently.
That's a neat comparison!
> Is it even worth continuing work on this before we can have those talks?
I'm not sure. What I outlined above is also only what I considered doing. There isn't any alignment on the team. Let's maybe first finish the fix refactor. That would allow me to focus on one side project only.
Chiming in here to ask if the development of this feature can also take Quarto (`.qmd`) into consideration. It's a great project aimed at providing a plaintext equivalent of Jupyter notebooks.

Hopefully, adding support shouldn't be too complicated because, like (vanilla) Markdown, `.qmd` files denote Python blocks explicitly, albeit with a slightly different syntax:
````
# This is a Markdown heading

This is regular text.

```{python}
#| echo: false
# This is a Python comment
print("Hello, Ruff!")
```
````
I don't imagine Ruff necessarily cares about specially formatted comments (though maybe it'll become relevant if you add formatting as a core functionality, #1904), but the `#| echo: false` at the top is a Quarto-specific comment that controls its behaviour. See [here](https://quarto.org/docs/computations/execution-options.html#output-options).
Unlike Markdown, Quarto documents are equivalent to Jupyter notebooks so you can (hopefully?) remix the work being done to add Jupyter and Markdown support.
FWIW from my prototypes the Markdown parser will involve large changes to the hot path, so it's a ways down the road. This is because each code block creates a "context" independent of every other code block (vs. a normal Python/Jupyter file where the context is carried throughout the file). The current backend is tied to the assumption that one file contains one context.
If Quarto involves carrying the context throughout the file, it's probably closer to a Jupyter implementation than a (theoretical) Markdown one. May be worth raising a separate feature request, depending on the work, but sounds pretty cool!
cc @dhruvmanila (who implemented Jupyter support and is on the core team)
For sanity, it might make absolute sense to consider each markdown fragment its own "file" with an offset starting point.

This capability might actually make it easier to build IDE/language server helpers, where one might want to reformat a function/class alone while editing a file (the reformatting would basically work off an initial line/column/position offset, plus the plain text of the lines intended for reformatting).

(I'm coming from https://github.com/entangled/entangled.py/issues/5)
Formatting markdown code blocks is supported in v0.1.8: https://github.com/astral-sh/ruff/pull/9030
tnx for the ping :+1:
```toml
[tool.ruff.format]
docstring-code-format = true
```
> Formatting markdown code blocks is supported in v0.1.8: #9030
I believe that applies to code in the docstrings of functions, as opposed to the true code blocks in markdown that this issue is about.
@evanrittenhouse Regarding the "discrete units": for both Quarto notebooks (#6140) and Jupyter notebooks exported to markdown with Jupytext (#8800), all the code blocks are considered a single context, similar to the "normal Python/Jupyter file where the context is carried throughout the file" that you mentioned.
> Embedded Languages
>
> The idea is that a document written in one language can contain code written in another language. Examples are:
>
> - Code blocks in markdown documents
> - SQL in Python code
> - Python code in Jupyter Notebooks
I'd like to mention HTML as well, although that would require configuring a selector to find the node element responsible for containing Python code, because there are no established standards. The `lang` attribute exists, but it is somewhat out of spec to use it for non-human languages (or more specifically, non-valid BCP 47 language tags). The most common solution is to use a custom class for styling anyway (`<div class="language-python">`), or a custom element entirely (`<myapp-code-viewer language="python">`).
Anyway I don't think that should come before Markdown is completely figured out. I just wanted to weigh in on the aspect of supporting Python code blocks in other languages in a generic way.
I have created a tool which enables this use case: `doccmd`.
For example, you might run:
```console
$ doccmd --language=python --no-pad-file --command="ruff format" README.md CHANGELOG.rst
$ doccmd --language=python --command="ruff check" README.md CHANGELOG.rst
```
It is new and I'm very open to feedback on it.
By default, `doccmd` basically adds newlines to the beginning of the extracted code block to make line numbers in errors match the documentation file. This doesn't work for `ruff format`, which expects to not have a bunch of newlines at the start, so there is the `--no-pad-file` option.
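The padding trick amounts to something like this sketch (a toy illustration, not doccmd's actual code):

```rust
/// Toy illustration of the padding trick described above: prefix the
/// extracted block with one blank line per preceding document line so
/// the linter's reported line numbers match the documentation file.
fn pad_block(block: &str, starting_line: usize) -> String {
    // `starting_line` is 1-indexed: a block starting on line 5 of the
    // document needs 4 leading newlines.
    format!("{}{}", "\n".repeat(starting_line.saturating_sub(1)), block)
}
```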
I'm also using it to run `mypy`, `pyright`, `vulture`, `interrogate`... on my documentation.
It would be cool to have the functionality to run `ruff` over `python` code blocks, like https://github.com/adamchainz/blacken-docs does for `black`.