david-a-wheeler opened this issue 3 years ago
@david-a-wheeler assigning to you if you don't mind.
Thanks for reaching out. We need some external feedback and validation!
> Libraries using dependency "ranges" is indeed not only a valid use case, but in most cases desirable in order to:
>
> - Reduce duplicate versions of the same library in ecosystems which do allow duplicates (e.g. npm)
> - Avoid conflicting versions of the same library in ecosystems which don't allow duplicates (e.g. maven)
>
> When something is not a library (i.e. it's an application), I do believe they should pin versions - but not necessarily for security reasons. And I should also point out that a lot of people really resist this, so we try not to be too opinionated.
Our doc https://github.com/ossf/scorecard/blob/main/docs/checks.md#pinned-dependencies says the same for applications. However, our code has not caught up with the recommendation yet - it looks for lock files (https://github.com/ossf/scorecard/blob/main/checks/pinned_dependencies.go#L763), which typically means pinning by hash.
We currently advise pinning container images by hash, but pinning applications by versions. I'm not sure why we treat containers and package dependencies differently.
@david-a-wheeler do you remember the reasoning?
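For context, "pinning by hash" for a container image means referencing an immutable digest rather than a mutable tag; a minimal sketch (the image name and digest below are placeholders, not from this thread):

```dockerfile
# Pinned by tag: the tag can be re-pointed to different content upstream
FROM python:3.11-slim

# Pinned by digest: always resolves to exactly the same image contents
# (replace <digest> with the real sha256 digest of the image)
FROM python:3.11-slim@sha256:<digest>
```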
> If a project commits a lock file then I think that covers most of the security risk. Certainly if I had to choose between a project pinning their direct dependencies in `package.json` versus locking their entire dependency tree in `package-lock.json`, then I'd choose the latter.
>
> Considerations:
>
> - Can we reliably detect if something is an application vs a library? In Renovate we attempt this, and our "auto" rangeStrategy setting will then pin dependencies if detected as an application
It seems in certain cases a package can contain both an application and a module, e.g. `package.json`'s `main` and `bin`. How does npm handle the presence of a lock file in this case? Does it apply to both `main` and `bin`?
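For reference, a hypothetical manifest (names made up) where one package is both a library (`main`) and a CLI application (`bin`):

```json
{
  "name": "somepkg",
  "version": "1.0.0",
  "main": "lib/index.js",
  "bin": { "somepkg": "bin/cli.js" },
  "dependencies": { "somedep": "^1.0.0" }
}
```

As far as I know, a committed `package-lock.json` is only honored when installing inside somepkg's own repo; consumers installing somepkg (as a dependency or via `npm i -g`) re-resolve the `^1.0.0` range themselves.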
> - What should be OSSF's recommendations for libraries in particular, which I also assume will be the majority of open source packages by count?
> I think the answer is probably:
>
> - Libraries should not be urged to pin dependencies
I think this is in line with our doc.
> - Libraries using ranges should ideally support the latest release, because sometimes vulnerability fixes in upstream dependencies are only released to latest
we don't have this right now in our doc. Thanks for the suggestion.
> In other words, if you're a library with one dependency `foo@1.x`, there should be no recommendation that you pin that to an exact version. However, if the latest version of `foo` is `2.2.0` then there should be a recommendation for your library to support either `1.x || 2.x` or `2.x` only.
>
> I'm not certain about whether to urge a library to support lock files or not. There is certainly very split opinion from open source maintainers who don't want the noise/hassle of keeping lock files updated when they manage dozens or hundreds of packages - even if automation is possible using e.g. Renovate.
>
> Applications should have a lock file IMO, although that still doesn't protect consumers of any application with dependencies at install time because lock files are not published to e.g. npmjs.
Is it fair to say this is a gap in the toolchain? Something we should recommend to package managers' maintainers?
> The only exception was the rarely used and mostly disliked `npm shrinkwrap`.
We currently don't differentiate between shrinkwrap and standard lock files in the implementation or in the documentation. Is it worth recommending this file since it covers a stronger threat model (users of the package will get the same dependency versions)?
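A minimal sketch of the shrinkwrap flow being discussed, assuming a standard npm project:

```bash
# In the package's own repo:
npm install       # creates/updates package-lock.json (not included when publishing)
npm shrinkwrap    # renames it to npm-shrinkwrap.json, which is published

# Because npm-shrinkwrap.json ships in the published tarball, consumers who
# install the package get the exact locked dependency tree instead of
# re-resolving semver ranges at install time.
```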
> In other words the threat vector is:
>
> - An open source application `someapp` exists with transitive dependencies
> - One or more of those transitive dependencies allows ranges
> - One of those ranged transitive dependencies `somedep` gets exploited and publishes a malicious version `somedep@1.0.1`
>
> The above would mean that anyone running `npm i -g someapp` would end up with `somedep@1.0.1` almost immediately after it's published.
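Concretely, using the hypothetical names from the quote above:

```bash
# Somewhere in someapp's dependency tree a manifest declares a range:
#   "somedep": "^1.0.0"

npm i -g someapp
# With no published lock/shrinkwrap file, npm resolves "^1.0.0" to the newest
# matching release on the registry - i.e. the freshly published, malicious
# somedep@1.0.1.
```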
Would it be beneficial to get more stakeholders involved in this discussion, write a recommendation doc with all nuances clarified (for each package manager/language, for libs vs applications), present it at the OSSF Best Practices for Open Source Developers working group, and then implement/document it in scorecard?
@rarkins wdyt?
@david-a-wheeler do you have a contact from dependabot we may loop in for this discussion?
The text has been updated in our docs. We no longer check for package managers' lock files. We're thinking of a more generic solution that relies on package managers' features to enforce pinning: pip's `--require-hashes`, `npm ci`, etc.
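For reference, the two mechanisms mentioned look roughly like this (the requirements.txt entry and hash are placeholders):

```bash
# pip: refuse to install anything whose hash isn't pinned, e.g. with
# requirements.txt entries like:
#   somedep==1.0.0 --hash=sha256:<hash>
pip install --require-hashes -r requirements.txt

# npm: install exactly what package-lock.json records, failing on any mismatch
npm ci
```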
@david-a-wheeler — can you take a look at the documentation at https://github.com/ossf/scorecard/blob/956d7c389516f70ba83b826d5e192546f0e04d44/docs/checks.md and let us know if that's sufficient?
The Dependency-Update-Tool text seems to assume that only applications are considered.
If a library uses pinned dependencies then this text also makes sense. However, libraries generally aren't pinned, so "updating dependencies" doesn't make as much sense. A variant of this text might make sense, e.g., libraries shouldn't force their users to use known-vulnerable libraries, especially after some grace period. However, that's more complicated to word & it's not clear to me that current tools can do this well (other than noticing when a library forbids the use of later versions).
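For example, a hypothetical library manifest that forbids later versions of a dependency, which is the detectable case mentioned above:

```json
{
  "name": "somelib",
  "dependencies": {
    "somedep": ">=1.0.0 <2.0.0"
  }
}
```

If the vulnerability fix only ships in somedep@2.x, every consumer of somelib is held on a vulnerable 1.x release until somelib widens the range.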
This probably needs more careful wording to deal correctly with libraries.