Closed jdkato closed 2 years ago
On a related note, I'm not wild about referring to the readability or capitalization checks as "extension points." These feel much less abstract than the others (in reality, they're really modified existence checks).
This is also relevant to my goal of supporting externally-defined checks that don't directly use one of the extension points (see https://github.com/ValeLint/vale/issues/45#issuecomment-305645683 for details).
Question: Does this "plugin" ignore content in Markdown that does not appear in a doc build? I'm thinking about links and descriptions such as alt text.
In other words, would a page full of links like `[some word](../path/to/page)` bias the results? IIRC, the Flesch-Kincaid calculations would read bits like the relative-path URL as a single (complicated) word.
Example: when I run https://developer.cobalt.io/getting-started/sign-in/ through:
My wild guess: Vale's flesch-kincaid plugin also reads link text in markdown, such as [some word](../path/to/something-complex) as single words, which would increase the score.
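To make that guess concrete, here's a rough sketch of the effect (using a naive vowel-run syllable counter and treating each sample as one sentence; this is not what Vale or any web tool actually does):

```python
import re

def syllables(word):
    # Naive estimate: each run of vowels approximates one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level, treating the sample as one sentence.
    words = text.split()
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / 1) + 11.8 * (syl / len(words)) - 15.59

# Just the link text vs. link text plus the raw relative path:
print(round(fk_grade("some word"), 2))                                    # → 2.89
print(round(fk_grade("some word ../path/to/something-complex"), 2))       # → 24.91
```

If a tool scores the raw path as a single many-syllable "word," the grade level jumps dramatically, which is the kind of distortion I'm guessing at.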
Thanks for posting this question, @mjang. (For context: we've been chatting on Slack and spitballing ideas of why the scores differ.)
Another idea: I wonder if the web tools are also counting sidebars and menus. 🤔 Those could distort scores in one direction or another.
Some examples:
> Question: Does this "plugin" ignore content in Markdown that does not appear in a doc build? I'm thinking about links and descriptions such as alt text.
Yes -- Vale tries to be as accurate as possible when calculating these metrics. It uses its summary scope, which strictly follows the formula: (1) it doesn't include non-prose content (links, HTML tags, source code, front matter, etc.) and (2) it only operates on sentence-containing blocks.
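That filtering step can be approximated like this (a rough regex-based sketch for illustration only; Vale works on the parsed document, not raw regexes):

```python
import re

def strip_non_prose(markdown):
    # Replace [text](url) links with just their link text.
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", markdown)
    # Drop fenced code blocks entirely.
    text = re.sub(r"```.*?```", "", text, flags=re.S)
    # Drop inline code spans.
    text = re.sub(r"`[^`]*`", "", text)
    return text

print(strip_non_prose("See [some word](../path/to/something-complex)."))
# → "See some word."
```

Only the surviving prose ("See some word.") would feed into the sentence, word, and syllable counts.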
There are a few problems with the comparison to WebFX:
Here's an example HTML document (a snippet from gitlab_flow):
```html
<p>Organizations coming to Git from other version control systems frequently find it hard to develop a productive workflow.
This article describes GitLab flow, which integrates the Git workflow with an issue tracking system.
It offers a transparent and effective way to work with Git:</p>
<pre><code class="language-mermaid">graph LR
  subgraph Git workflow
    A[Working copy] --> |git add| B[Index]
    B --> |git commit| C[Local repository]
    C --> |git push| D[Remote repository]
  end
</code></pre>
```
WebFX reports 10 sentences, 68 words, and a Flesch Kincaid Grade Level of 7.2, which is wildly inaccurate.
Vale, on the other hand, internally calculates 3 sentences, 44 words, and a score of 10.78.
Let's break this down:
Sentence 1 [18 words]: Organizations coming to Git from other version control systems frequently find it hard to develop a productive workflow.
Sentence 2 [15 words]: This article describes GitLab flow, which integrates the Git workflow with an issue tracking system.
Sentence 3 [11 words]: It offers a transparent and effective way to work with Git:
Total: 3 sentences, 44 words.
If we pass just the "correct" text to WebFX, its calculations change to 3 sentences, 44 words, and a score of 10.2. The remaining difference likely comes from how each tool counts "complex words" and syllables, but the results are much closer.
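For reference, the Flesch-Kincaid grade-level formula both tools implement is 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. Plugging in the agreed-upon counts shows where the remaining gap lives (the syllable counts below are assumptions back-solved from each tool's reported score, not values either tool publishes):

```python
def fk_grade(sentences, words, syllables):
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Both tools agree on 3 sentences and 44 words. A syllable count of 77
# reproduces Vale's 10.78; WebFX's 10.2 implies it counted roughly two
# fewer syllables in the same text.
print(round(fk_grade(3, 44, 77), 2))  # → 10.78
```

So once the sentence and word counts match, the entire residual difference comes down to syllable counting.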
I'm reopening this issue because I think it would be useful to add a "View: Readability" option to https://vale-studio.errata.ai/.
To extend the discussion from the Write the Docs slack:
I need to be able to do an "apples to apples" comparison of Flesch-Kincaid scores. And it's at best difficult to apply the Vale plugin to HTML content (Sure, I could pull the source code from external HTML into a repo, but that requires understanding git, repos, and Vale).
So I need to know -- do you have, or know of, a web tool that gives results consistent with your Flesch-Kincaid plugin?
I think you forgot to add this extension to the documentation.
I'm thinking about including a new readability extension point that will allow users to set standards for metrics like Flesch-Kincaid, Gunning-Fog, and Coleman-Liau. For example, a rule could warn about any paragraph that exceeds an 8th-grade reading level.
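A rule along those lines might look something like this (a hypothetical sketch only -- the key names and final syntax are illustrative, not decided):

```yaml
# Hypothetical readability rule; key names are not finalized.
extends: readability
message: "Try to keep the grade level (%s) at or below 8."
level: warning
metrics:
  - Flesch-Kincaid
grade: 8
```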
The prose library already supports these metrics, so it's just a matter of deciding on the check's implementation details.