Closed: bmwoodruff closed this 6 days ago
I don't care to post a link to the branch. It took about 30 minutes (while distracted), and then I had to run the tests. I missed a few `# may vary` tags, which took a bit of extra time to clean up at the end (an extra `spin build`...), and I never checked the docs visually; I'm guessing that would add another half an hour. So for 30 functions, it works out to roughly 2-3 minutes per function to review. Several didn't end up getting any examples added.
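For reference, `# may vary` tags output that the doctest tooling should not compare verbatim, e.g. eigenvector signs in `np.linalg` (an illustrative example, not one of the generated ones):

```python
>>> import numpy as np
>>> w, v = np.linalg.eig(np.array([[1., 2.], [2., 1.]]))
>>> v  # may vary
array([[ 0.70710678, -0.70710678],
       [ 0.70710678,  0.70710678]])
```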
With 829 functions, this could end up being around 2000 minutes (30 hours, give or take) of human review. With 2 interns working on this, it would take about a week, assuming they can go the same speed. I'd guess perhaps 2 weeks of human review, followed by a quicker pass from the tech lead (hopefully 1/4 of the time) and the maintainer (another 1/2 of that, maybe 4 hours). This could add 1500 examples to the codebase.
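For what it's worth, those totals are just the quoted numbers multiplied out; a quick sanity check of the arithmetic:

```python
# Back-of-the-envelope totals using only the numbers quoted above;
# nothing here is measured.
functions = 829
per_function_min = (2, 3)                  # observed minutes per function

low, high = (functions * m for m in per_function_min)
print(f"intern review: {low}-{high} min ({low / 60:.0f}-{high / 60:.0f} h)")

mid = (low + high) / 2                     # ~2070 min, "2000 give or take"
tech_lead = mid / 4                        # "hopefully 1/4 the time"
maintainer = tech_lead / 2                 # "another 1/2 that"
print(f"tech lead ~{tech_lead / 60:.1f} h, maintainer ~{maintainer / 60:.1f} h")
```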
Building numpy, installing it, and spinning the docs adds an extra chunk of time for each batch (though that can happen in the background).
Right now the bigger issue I need to deal with is the way docstrings are handled for aliases, and overwritten using `doc_note` and various other classes throughout the codebase. This makes it problematic to deal with docstrings algorithmically.
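To illustrate the alias problem, here's a rough sketch (my own, not existing tooling) that groups names bound to the same object, which is exactly the case an in-place docstring-editing script mishandles:

```python
import numpy as np
from collections import defaultdict

# Aliases are two public names bound to one object, so patching the
# docstring "for" one name silently changes the other. Docstrings
# assembled at import time (e.g. via doc_note) are a related headache:
# the text doesn't live verbatim in any .py source file.
by_obj = defaultdict(list)
for name in dir(np):
    if name.startswith("_"):
        continue
    obj = getattr(np, name)
    if callable(obj):
        by_obj[id(obj)].append(name)

aliases = [names for names in by_obj.values() if len(names) > 1]
print(aliases)  # e.g. ['abs', 'absolute'], ['conj', 'conjugate'], ...
```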
Description:
I want to know how long it takes for a human to review a module with 30 functions. We need some data point(s) on how much human time it takes to review AI-generated examples.
Run `example-post-processing.py` on the `np.linalg` module. `tools/example-checker.ipynb` is not required, but may speed things up; I'll avoid that for now. Then run `spin lint`, which will help identify lines that are too long. Adjust those lines appropriately.
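In case it's useful, the long-line check amounts to something like this (a hypothetical stand-in, not the actual `spin lint` implementation):

```python
from pathlib import Path

# Flag lines longer than 79 characters, roughly what the lint
# step complains about in generated examples.
for path in Path("numpy/linalg").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        if len(line) > 79:
            print(f"{path}:{lineno}: line too long ({len(line)} chars)")
```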
Acceptance Criteria: