Closed bhuvy2 closed 5 years ago
This is actually a solved problem. GNU ships the utilities style
and diction
that do most of the work. We'd just have to put them in the pipeline to get some analysis.
There's no way to do this in general. The tools either spit out 100 errors or no errors at all.
Language barriers are a problem for a lot of students. We have a lot of text on the website that scores very low on readability. While that may seem acceptable because we are teaching tough material, it is not. This may be a false analogy, but if Feynman could explain most of his theoretical physics concepts at a high readability level, so can we. We should add a stripped-down parser to the website that scores the documentation on readability. The target reading level should be high school to college undergraduate.
As with all metrics, it won't be perfect, but it will give us some signal on how students (especially ESL students) cope with reading the documentation.
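As a sketch of what such a scorer could look like, here is a minimal Flesch-Kincaid grade-level calculator. Everything here is an assumption on my part: the syllable counter is a rough heuristic, and the regex sentence/word splitting would need to be hardened against abbreviations, code spans, and so on before it touched real course pages.

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of consecutive vowels,
    # dropping a trailing silent 'e' when the word has more
    # than one vowel group. Good enough for a coarse metric.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Naive sentence and word splitting; a real parser would
    # strip markup and handle abbreviations first.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

if __name__ == "__main__":
    simple = "The cat sat on the mat. It was warm."
    dense = ("Comprehensive documentation necessitates considerable "
             "deliberation regarding terminological accessibility.")
    print(flesch_kincaid_grade(simple))
    print(flesch_kincaid_grade(dense))
```

The output is a U.S. school grade level, so flagging any page that scores above roughly 12-16 would match the high-school-to-undergraduate target.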
And before we have a discussion about "professional documentation" being terse, we are still an introductory course. We are supposed to wean them off the autograders of previous courses and center them on documentation, not make them read hard-to-read docs and test them on it (I'm looking at you, ECE 391, with your obscure 1990s keyboard-buffer questions).