Closed — pauladkisson closed this 1 year ago
Oh. It looks like a GitHub Action, but it does not expose a public pre-commit hook.
Hooks have to come from here: https://pre-commit.com/hooks.html
BUT you could still set this up as a GitHub Action using that workflow.
EDIT: IDK any more; it looks like it ought to be possible to set it up as a custom hook.
Wait, this is weird, because the source repo does have the hook config file: https://github.com/numpy/numpydoc/blob/main/.pre-commit-hooks.yaml
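For context, a repo advertises a pre-commit hook by shipping a `.pre-commit-hooks.yaml` at its root. The numpydoc one looks roughly like this (field values paraphrased from memory; see the linked file for the exact contents):

```yaml
# .pre-commit-hooks.yaml (in the hook provider's repo, not the consumer's)
- id: numpydoc-validation
  name: numpydoc-validation
  description: Validate that docstrings conform to the numpydoc standard.
  entry: python -m numpydoc.hooks.validate_docstrings
  language: python
  types: [python]
```

If this file exists on `main`, pre-commit should be able to consume the hook directly from the repo URL.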
Yeah, I was under the impression that pre-commit supported custom hooks.
OK, I found the issue:
The pre-commit hook was added 3 months ago, after the last official release that you are pinning to.
Need to figure out how to use their current main branch as the 'rev' version.
Got it! Now that it's referencing the latest commit on main (by SHA), it finally works.
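For reference, pointing `rev` at an unreleased commit just means using the full commit SHA where a tag would normally go (the SHA below is a placeholder, not the actual pinned commit):

```yaml
# .pre-commit-config.yaml (in the consuming repo)
repos:
  - repo: https://github.com/numpy/numpydoc
    # 'rev' accepts a tag or a full commit SHA; placeholder SHA shown here
    rev: 0123456789abcdef0123456789abcdef01234567
    hooks:
      - id: numpydoc-validation
```

Note that `pre-commit autoupdate` will later move this pin to the newest tag, so the SHA pin can be dropped once a release containing the hook ships.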
Great!
That's a nice-looking summary report, too.
Yep, and trying it out on example_datasets/toy_example.py per our earlier discussion, it seems to do exactly what we were after: it catches `<parameter> : <type>` vs `<parameter>:<type>`, enforces a `.` at the end of all descriptions, etc.
The numpydoc validation hook appears to work well, but there are many documentation errors to fix (>>100). I don't really want to merge this until all the docstrings are passing, but I also don't want to fix them all myself because there are too many.
I tried out aider today to leverage GPT to work through this task, but without much success. GPT-3.5 yielded unreliable results (and some undesirable off-target changes) on the example file I was testing, and GPT-4 failed due to its limited context window, even on a moderately sized file (~400 lines).
It may be worthwhile to design our own pre-commit hook to automatically fix (at least some of) the numpydoc_validation errors. This solution could either leverage GPT in a more focused way (reading only each offending docstring and its corresponding error message, one at a time) and/or hard-code fixes for the 37 error codes.
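As a rough sketch of the "hard-coded fix" idea, each fixer would handle one error pattern on one docstring at a time. The two fixes below (normalizing `<parameter>:<type>` spacing and enforcing a trailing period) are the ones discussed above; the function names and the exact regex rules are my own assumptions, and a real hook would dispatch on the codes reported by numpydoc's validator:

```python
import re


def fix_parameter_spacing(docstring: str) -> str:
    """Rewrite '<parameter>:<type>' lines as '<parameter> : <type>'.

    Hypothetical fixer: loosely matches an indented identifier immediately
    followed by a colon and a type, as found in a Parameters section.
    A real implementation would restrict itself to that section.
    """
    pattern = re.compile(r"^(\s*)(\w+):(\S.*)$", flags=re.MULTILINE)
    return pattern.sub(r"\1\2 : \3", docstring)


def ensure_trailing_period(description: str) -> str:
    """Append a '.' to a description line that lacks terminal punctuation."""
    stripped = description.rstrip()
    if stripped and stripped[-1] not in ".!?":
        return stripped + "."
    return stripped


doc = """Parameters
----------
x:int
    The first value
"""
print(fix_parameter_spacing(doc))
print(ensure_trailing_period("The first value"))
```

The appeal of this shape is that each fixer is small, independently testable, and only ever sees one offending docstring, which is also exactly the scoping a GPT-based variant would need to stay inside a model's context window.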
But this is a pretty low priority for the project as a whole, and it should probably wait until the more pressing conversions are finished.
Testing numpydoc's validation hook in addition to pydocstyle.