When deciding whether to approve a new dependency, I've been informally checking for a few criteria. We should formalize these criteria, so I've put them into this checklist:
[ ] Actively maintained, based on recent commit history: (do NOT rely on the OpenSSF scorecard for this one, since it counts automated commits, e.g. from Dependabot, as signs of activity)
Code changes are secured by one of the following:
  [ ] Only a small group of people have commit access:
  [ ] Code review is enforced:
[ ] No binary artifacts:
[ ] Any unfixed vulnerabilities reported by GitHub are minor or not relevant:
[ ] Compatible license:
[ ] Reputable maintainer(s): Consider whether they are well-known figures or connected to well-known figures. For example, do they work for a known company? Make sure to verify any connections, e.g. by membership in GitHub organizations or checking company directories. This is very light vetting and is not intended to show that the maintainer is trustworthy. The goal is to convince ourselves that the maintainer has enough "reputation signals" that it'd be unrealistic for a bad actor to build up these signals for malicious purposes.
[ ] Established project: Consider how long the project has been operating for and whether it's widely used. Remember that commit timestamps can be forged! Instead, you can rely on the timestamps on comments in GitHub PR threads.
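For the "actively maintained" check, one quick (and admittedly rough) way to look past bot noise is to list recent commit authors and filter out the obvious automated accounts. This is only a sketch of the idea, not a vetted tool; the six-month window and the bot-name patterns are arbitrary choices, and reviewers should still eyeball the commit history itself:

```shell
#!/bin/sh
# Rough activity check, run from inside a clone of the candidate dependency.
# Lists distinct commit authors from the last six months, skipping
# Dependabot and other "[bot]" accounts so automation doesn't inflate
# the apparent activity.
git log --since="6 months ago" --format='%an' \
  | grep -iv -e 'dependabot' -e '\[bot\]' \
  | sort -u
```

An empty result would suggest the only recent activity is automated, which is exactly the case the checklist item warns about.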
My intention is that after the colon (:) for each ticked box, we document our reasons for ticking the box. The OpenSSF security scorecards are a useful way to check many of these points. Here's an example usage: https://github.com/oppia/oppia/pull/20362#discussion_r1626804621

If this process sounds good, I propose putting this into the wiki and copying it into the review comment for each PR adding a dependency.