andrewvaughan opened 1 year ago
The general idea is to find a way to make proselint ignore things other than text segments of files, but we're not quite there yet, since that would require a parser.
I am on holiday right now, but will be happy to discuss further when I return.
Thanks! An interim idea might be to render any Markdown found to HTML and run proselint on the rendered file. That way you don't need to create another parser; you "normalize" everything around HTML parsing instead. Since most documentation formats already have a standard way of rendering to HTML, this could simplify the problem across many formats at once.
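A rough sketch of that "normalize around HTML" idea, assuming the Markdown has already been rendered to HTML by some external tool (e.g. `pandoc` or the `markdown` package; the rendering step is not shown). The stdlib `html.parser` pulls out just the text nodes, which could then be piped to proselint:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text nodes, dropping tags and attributes."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Skip whitespace-only nodes between tags
        if data.strip():
            self.parts.append(data.strip())

# HTML as produced by a Markdown renderer (rendering step not shown)
html = '<h1>Title</h1>\n<p>Some <em>prose</em> with an image <img alt="logo" src="img.png">.</p>'

extractor = TextExtractor()
extractor.feed(html)
text = " ".join(extractor.parts)
print(text)  # Title Some prose with an image .
```

This is only a sketch: it loses sentence boundaries and alt text handling, so a real implementation would need to decide how image alt text, code blocks, and tables should be treated before linting.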
While this is an interesting idea, it would be slow and would take some effort. In general, the best solution would be for another tool to pass only the text segments of files to proselint. I believe the linting ecosystem would be better off if a single parser interpreted each file and distributed the relevant sections to each linter accordingly. However, in terms of what we can do now, I unfortunately do not see a fix for this happening in the near future.
Markdown is almost impossible to check with proselint because of (at least) the following two checks:

- `typography.symbols.sentence_spacing` (which fails any time you have a table formatted by `prettier`, another very common linter for Markdown users)
- `leonard.exclamation.30ppm` (as exclamation points are a very common character in Markdown files, since they are part of the language standard, e.g. for images)

There may be more; these are just the ones I've run into thus far.
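For anyone hitting the same two checks, a possible interim workaround is to disable them in proselint's JSON config file (named `.proselintrc` in older releases and `.proselintrc.json` in newer ones; adjust for your version):

```json
{
  "checks": {
    "typography.symbols.sentence_spacing": false,
    "leonard.exclamation.30ppm": false
  }
}
```

This silences the noisy checks globally rather than per-file, so it trades away those checks for regular prose too.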