ossf / scorecard

OpenSSF Scorecard - Security health metrics for Open Source
https://scorecard.dev

Feature: Allow unpinned in non-privileged workflows #2018

Open laurentsimon opened 2 years ago

laurentsimon commented 2 years ago

Workflows that have no secrets and all their permissions set to read/none don't benefit from being pinned, and pinning adds a burden for users to keep them up to date. We may want to relax the Token-Permission check, making this a "bonus" point rather than flagging it as a problem.
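For concreteness, here is a minimal sketch of the kind of workflow in question (the action versions are illustrative):

```yaml
# Hypothetical non-privileged workflow: read-only permissions, no secrets.
name: lint
on: [pull_request]

permissions: read-all

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # tag-pinned, not hash-pinned
      - uses: github/super-linter@v4   # same; currently penalized by scorecard
```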

laurentsimon commented 2 years ago

/cc @raghavkaul

evverx commented 1 year ago

Workflows that have no secrets and all their permissions set to read/none don't benefit from being pinned

They do, but it has nothing to do with security. systemd pins its dependencies to make the CI easier to maintain. For example, super-linter broke at some point (https://github.com/github/super-linter/issues/2128) and when Dependabot tried to update the action, the update failed, so the last working version kept being used until the bug was fixed. If it hadn't been pinned, the part of the CI relying on super-linter would have just been broken.
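For reference, the usual middle ground is to pin to a full commit SHA and keep the release tag in a trailing comment, which Dependabot understands and keeps up to date (the SHA below is made up for illustration):

```yaml
steps:
  # Hash-pinned; the comment records which release the SHA corresponds to,
  # and Dependabot bumps both together.
  - uses: github/super-linter@b0d8f6a77e6df1b8b1b2dd7d10e3ed41b9f5d8a2 # v4.9.7
```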

Having said that, I agree that unpinned read-only actions shouldn't be flagged by analyzers focusing primarily on security, and it seems to me that promotional PRs like the one discussed at https://github.com/ossf/scorecard-action/issues/1017, triggered semi-automatically by overzealous "security" tools fixing imaginary "security" issues, could be eliminated by addressing this issue. Can it be bumped somehow?

laurentsimon commented 1 year ago

I think we'll try to tackle this in Q1. It's on our radar.

evverx commented 1 year ago

I wonder if there are any updates on this?

(it's related to https://github.com/systemd/systemd/pull/27530)

laurentsimon commented 1 year ago

@diogoteles08 is working on it afaik

evverx commented 1 year ago

Good to know. Thanks!

It seems it should cover CIFuzz (at least as long as it doesn't write SARIF). I wonder whether ClusterFuzzLite can pass this check, though: it comes with unpinned Docker images.

diogoteles08 commented 1 year ago

Hi there! Yes, I'll be working on this issue. Expected by the end of Q1.

hartwork commented 1 year ago

I would like to emphasize that even for read-only actions, a gone-malicious action could:

  • modify build artifacts
  • leak CI secrets
  • release/deploy/publish malicious code

So if the scorecard de-penalizes read-only actions, it closes its eyes to all of these scenarios. Is that a good idea? I would personally expect to at least require that the action does not use any secrets (and check for that) for it to be considered safe without pinning, just my 2 cents.

Am I missing something in this picture?

CC @hugovk @joycebrum

PS: https://github.com/step-security/harden-runner tries to go the exact opposite way, in case someone is interested.

evverx commented 1 year ago

modify build artifacts

I wonder how it can do this if it doesn't have contents: write. Or do you mean actions that upload stuff to, say, PyPI?

leak CI secrets

They can leak secrets, but I don't think pinning stuff to hashes helps much here, given that under the hood a lot of actions use a bunch of unpinned modules nobody has ever reviewed. PRs where Dependabot updates stuff aren't reviewed either.

release/deploy/publish malicious code

They can do that too and pinning stuff doesn't help here either.

https://github.com/step-security/harden-runner tries to go the exact opposite way, in case someone is interested

I took a look at it https://github.com/systemd/systemd/pull/25205#issuecomment-1321103480 so personally I'm not interested in that.

All in all, I think that instead of that security theater, scorecard should ideally audit/scan actions. This was discussed in https://github.com/ossf/scorecard/issues/3022#issuecomment-1551330301 and https://github.com/ossf/scorecard/issues/2991#issuecomment-1544477217.

hartwork commented 1 year ago

@evverx pinning makes sure that the same action code will be run every time, allowing a review-once-run-multiple-times kind of workflow. That's the problem pinning solves: it makes changes visible and controlled, nothing more. That should answer 99% of your question. By build artifact I meant files uploaded via the actions/upload-artifact action, which (unlike PyPI) does not require any additional secrets.

evverx commented 1 year ago

That should answer 99% of your question

I didn't ask any question regarding pinning stuff. I know what it is and what it's supposed to accomplish.

By build artifact I meant files uploaded via the actions/upload-artifact action, which (unlike PyPI) does not require any additional secrets

It's a well-known GitHub bug: https://github.com/google/clusterfuzzlite/issues/84#issuecomment-1033683135. It should be fixed by GitHub.

hartwork commented 1 year ago

That should answer 99% of your question

I didn't ask any question regarding pinning stuff. I know what it is and what it's supposed to accomplish.

@evverx if the use of pinning is clear, what exactly is your motive and role in this discussion then?

By build artifact I meant files uploaded via the actions/upload-artifact action, which (unlike PyPI) does not require any additional secrets

It's a well-known GitHub bug: google/clusterfuzzlite#84 (comment). It should be fixed by GitHub.

That's arguable, but an interesting point.

evverx commented 1 year ago

what exactly is your motive and role in this discussion then?

https://github.com/ossf/scorecard/issues/2018#issuecomment-1324954369

That's arguable, but an interesting point.

I'm not sure why stuff is arguable when GitHub bugs and shortcomings come up :-)

hartwork commented 1 year ago

what exactly is your motive and role in this discussion then?

#2018 (comment)

@evverx either you acknowledge the security aspect of pinning even for read-only actions or you don't. If you do, keeping the penalty is in your interest, which is what I was arguing for. Excellent.

evverx commented 1 year ago

either you acknowledge the security aspect of pinning even for read-only actions or you don't

I can acknowledge that, but it doesn't mean that false positives shouldn't be fixed. And this issue is about false positives.

If you do, keeping the penalty is in your interest

It isn't. I'm not interested in receiving bogus promotional "security" PRs fixing imaginary "security" issues. This issue is supposed to address false positives.

hartwork commented 1 year ago

@evverx I see. I'm not sure I mind false positives in this context; I support more hardening. Your point is clear then, thanks.

evverx commented 1 year ago

I support more hardening

FWIW I do too. The problem is that even though I would love to pin CIFuzz and ClusterFuzzLite, they can't be pinned. Another way to address this would be https://github.com/ossf/scorecard/issues/1907, but that would waste maintainers' time, wouldn't be visible outside of repositories and wouldn't prevent PRs trying to fix them.

Anyway, the reason I'm annoyed is that thanks to all that automation I somehow end up receiving bogus PRs, bogus CVEs generated automatically based on OSV (with inaccurate commit ranges) and so on, and all of that can be avoided by addressing false positives.

diogoteles08 commented 1 year ago

Hi!

@evverx I understand your complaints about false positives and annoying notifications on GitHub and we are taking them seriously! As you said, this current issue intends to reduce false positives.

About the idea of enhancing Scorecard with some kind of feature to scan/audit actions: I also think it's a good idea and it's on my radar, but it's still a bit far from Scorecard's current approach and might take longer to become feasible. For now I believe it's more reasonable to focus on short-term fixes that avoid false positives and enhance the UX of Scorecard for maintainers.

diogoteles08 commented 1 year ago

Thanks for the input, @hartwork.

To answer your main complaints: for this issue we are already considering that unpinned actions would be allowed (i.e. not cause a score decrease in the Pinned-Dependencies check) only if the action (see the sketch after the list):

  1. Is not called with any secret
  2. Is not called with any "dangerous permission"
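A sketch of the distinction, with hypothetical action names: the first step would meet both conditions, while the second would keep being penalized because a secret is passed in:

```yaml
name: checks
on: [pull_request]
permissions: read-all

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      # No secrets, read-only permissions: a candidate for "allowed unpinned".
      - uses: some-org/markdown-lint@v2
      # Receives a secret, so it would still need to be hash-pinned.
      - uses: some-org/coverage-upload@v1
        with:
          token: ${{ secrets.COVERAGE_TOKEN }}
```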

But I also believe that hash-pinning actions can bring security benefits even when they are used with read-only permissions. I tried to compile my concerns as follows:

  1. As I raised in issue #3131, security-events: read can be dangerous. So we might want to define it as a "dangerous permission" and keep punishing unpinned actions that use this permission.
  2. As @hartwork pointed out, actions with read permissions can still modify build artifacts released with actions/upload-artifact. For this I'd like to hear other opinions, because I don't know if those are ever used for "publishing" or anything critical.
  3. Contributing to the general idea of hash-pinning being a good practice, because it also prevents bugs or malfunctions not directly related to security but that could eventually be abused. One can also argue that leaving an action unpinned can be dangerous in case someone later changes the permissions to write and doesn't remember to pin the action (scorecard should point that out, though).

Given those points, we could debate whether we want to keep a light score reduction even for the "allowed cases" of unpinned actions discussed in this issue. Maybe something like a 0.1 score reduction for each occurrence? With this setup, a repository that prefers to keep "safe actions" unpinned won't have a 10/10, but should still get something around 9/10, which is already a pretty great score.

Would like to hear the maintainers' input on this. @laurentsimon @raghavkaul

joycebrum commented 1 year ago

I really liked this idea of "punishing, but not much" because it would also differentiate the case of a "safe workflow unpinned" from a "potentially dangerous workflow unpinned".

Good idea @diogoteles08 !

evverx commented 1 year ago

enhance the UX of Scorecard for maintainers

I agree that that's much more important. The CLOMonitor folks and the dashboards they built are the only reason I didn't throw out the scorecard action. (Though it's still weird to me that it was somehow considered totally acceptable to just dump JSON.)

this current issue intends to reduce false positives

I completely forgot that at least in the systemd repository CIFuzz is no longer read-only (https://github.com/systemd/systemd/pull/27501), so I guess at least there it's too late :-) My opinion on the security tab and the ability to read it can be found at https://github.com/google/oss-fuzz/issues/10090#issuecomment-1515709570. That same argument is more or less applicable to the 90-day OSS-Fuzz disclosure, but I guess it somehow makes people feel secure for some reason.

Anyway, since I'm subscribed to some OSS-Fuzz issues, it seems to me that libexpat is in the process of integrating CIFuzz. My journey with pinning it ended in https://github.com/google/clusterfuzzlite/issues/95 (where I gave up). If libexpat could somehow convince OSS-Fuzz to make its actions actually meaningfully pinnable, that would be great (though personally I don't think it's worth it).

spencerschrock commented 1 year ago

modify build artifacts

Am I interpreting your scenario right?

  1. Some repo has a workflow which saves a workflow artifact via actions/upload-artifact
  2. Another step later in the workflow is compromised and now calls actions/upload-artifact to modify and overwrite the artifact from step 1?

hartwork commented 1 year ago

@spencerschrock what I had in mind was more like:

  1. actions/upload-artifact is used near the end of the workflow
  2. a gone-malicious action earlier in the workflow modifies files in some way, so that what is uploaded later will also be malicious
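A sketch of that ordering, with a hypothetical action name for step 1; the only point is that it runs earlier, in the same workspace as the upload:

```yaml
steps:
  - uses: actions/checkout@v4
  # (1) If this unpinned third-party action goes malicious, it can
  #     rewrite anything in the workspace...
  - uses: some-org/build-helper@v1
  # (2) ...and whatever it left behind is what gets uploaded here.
  - uses: actions/upload-artifact@v3
    with:
      name: dist
      path: dist/
```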
pnacht commented 1 year ago

Is it common for workflow artifacts created with upload-artifact to be used in security-critical processes? Are there projects that use upload-artifact to create their release artifacts (and then manually transfer them to the release, I suppose)?

hartwork commented 1 year ago

@pnacht I can think of two scenarios in particular where using build artifacts in releases can be tempting: make dist (potentially combined with a wish not to leak local files into release archives by accident) and the creation of Windows binaries.

evverx commented 1 year ago

I can think of two scenarios in particular where using build artifacts in releases can be tempting: make dist (potentially combined with a wish not to leak local files into release archives by accident) and the creation of Windows binaries

Do some projects actually use artifacts that aren't protected from anything, including random pull requests, to prepare releases?

hartwork commented 1 year ago

@evverx I don't understand the question.

evverx commented 1 year ago

@hartwork sorry. I meant that it's possible to just open a PR and write artifacts without having any sort of permissions, as shown for example in https://github.com/google/clusterfuzzlite/issues/84#issuecomment-1033683135 (though there my concern was that under certain circumstances it was possible to overdraft a bunch of credit cards by allowing random PRs to control the size of corpora). If those artifacts then flow into releases somehow, it's basically the same as allowing random PRs to push releases. (It could be that I misunderstood the use case though.)

laurentsimon commented 1 year ago

They can leak secrets, but I don't think pinning stuff to hashes helps much here, given that under the hood a lot of actions use a bunch of unpinned modules nobody has ever reviewed

It's true that pinned Actions may download unpinned dependencies. I think this typically happens when the Action uses a binary and runs it as a CLI. For JavaScript Actions (the majority, I think), most of them have a package-lock.json, and hopefully GitHub installs them with npm ci. I have not verified that, though; it'd be nice to have answers to these questions.

pnacht commented 1 year ago

@pnacht I can think of two scenarios in particular where using build artifacts in releases can be tempting: make dist (potentially combined with a wish not to leak local files into release archives by accident) and the creation of Windows binaries.

@hartwork I still don't think I understand these use cases. If someone wants to create artifacts (e.g. Windows binaries) for a new release, wouldn't they instead add the artifacts directly to the release (using softprops/action-gh-release*) instead of uploading them as workflow artifacts and manually migrating them to the release?

I also didn't understand the concern about "[leaking] local files"... workflow artifacts are also public, so I don't see how using upload-artifact would be safer.


* Unfortunately GitHub no longer has an official "add artifact to release" action; they used to have actions/upload-release-asset, but it's been archived and instead recommends softprops/action-gh-release.
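For comparison, attaching assets directly to the release with softprops/action-gh-release looks roughly like this; note the write permission it needs, which is part of the trade-off being discussed:

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # required to create the release and attach assets
    steps:
      - uses: softprops/action-gh-release@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          files: dist/*.tar.gz
```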

evverx commented 1 year ago

@pnacht I was curious, so I looked for workflows where upload-artifact is used. Normally they are full-blown release workflows with contents: write where artifacts are passed between jobs. The most important part is that jobs in workflows like that can only download artifacts produced by their own sibling jobs, which in theory makes it impossible for random PRs to inject artifacts there. However, that restriction apparently isn't convenient in some cases, and that's where the two-step pattern comes in. Some projects distribute their nightly builds that way to avoid polluting their releases, sending notifications and so on, and that's where it gets interesting :-) The GitHub UI isn't convenient for people consuming those nightly builds, so there are unofficial services that provide direct links to some artifacts. I wouldn't download nightly builds like that :-) but I get why people do it. In my opinion, use cases like that shouldn't be encouraged.

hartwork commented 1 year ago

@pnacht maybe so, maybe not. softprops/action-gh-release is third-party, needs more permissions, and is arguably harder to use than plain upload-artifact. Also, some people like to have a manual hand in publishing/deployment.

I also didn't understand the concern about "[leaking] local files"... workflow artifacts are also public, so I don't see how using upload-artifact would be safer.

@pnacht what I meant is that when release archives are produced on developer machines, there is a risk that the resulting archives include files from the development machine that were never intended to be in the release archives. I know of at least two real cases where that happened. One way to prevent that scenario altogether is to produce release archives in a clean environment, and CI is in theory a candidate for that, depending on your threat model.

pnacht commented 1 year ago

@pnacht what I meant is that when release archives are produced on developer machines, there is a risk that the resulting archives include files from the development machine that were never intended to be in the release archives. I know of at least two real cases where that happened. One way to prevent that scenario altogether is to produce release archives in a clean environment, and CI is in theory a candidate for that, depending on your threat model.

Oh, I'm a big proponent of publishing via CI rather than from laptops. Couldn't agree more! I just misunderstood you before; I thought you were talking about using upload-artifact in CI as being better than softprops/action-gh-release for avoiding leaked info.

evverx commented 1 year ago

softprops/action-gh-release is third-party, needs more permissions, and is arguably harder to use than plain upload-artifact

FWIW I don't think it's actively maintained anymore. Other than that, it doesn't even support make_latest. systemd uses it to create releases when tags are pushed, without uploading anything, but I think that particular use case could be covered by a small bash script calling https://docs.github.com/en/rest/releases/releases?apiVersion=2022-11-28#create-a-release. That action just calls that method, and for some reason uses a gazillion JS modules to do it :-)
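A sketch of what that small script could look like as a workflow step (gh is preinstalled on GitHub-hosted runners):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Create a release for the pushed tag
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # One REST call; no third-party action or JS modules involved.
          gh api "repos/${GITHUB_REPOSITORY}/releases" \
            -f tag_name="${GITHUB_REF_NAME}" \
            -f name="${GITHUB_REF_NAME}"
```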

diogoteles08 commented 1 year ago

So, we're agreeing that upload-artifact is not completely safe, even though it can be used with read-only permissions, right?

Getting back to the original discussion of the issue, one straightforward idea is that we could add this as another requirement for an "allowed unpinned action". So unpinned actions would be allowed (i.e. not cause a score decrease in the Pinned-Dependencies check) only if the action:

  1. Is not called with any secret
  2. Is not called with any "dangerous permission"
  3. Does not call upload-artifact (also because the uploaded artifacts could be used by jobs with write permissions in the same workflow)

WDYT?

hartwork commented 1 year ago

@diogoteles08 I think when starting to look for use of https://github.com/actions/upload-artifact in particular, the question would be how special that action really is. If I use a fork of that action and the fork works and does something similar, it would evade detection and still be affected, I suppose? (Maybe I'm biased; I think pinning everything is the best way to go. I seem to mind the pull request and commit noise that can go along with it less than others do.)

evverx commented 1 year ago

@diogoteles08 it seems to me that it would be better to assume that all actions (regardless of whether they are read-only or not) can be used to produce important things (regardless of whether that's a good idea or not), so I'm inclined to say that scorecard should probably ask projects to pin stuff there too. I think false positives can be suppressed using something like https://github.com/ossf/scorecard/issues/1907.

diogoteles08 commented 1 year ago

Yes, that's sad but true.

diogoteles08 commented 1 year ago

The best we could do is penalise the "allowed actions" I specified here less: not the 0.1 decrease I was talking about, something bigger than that, but still smaller than the reduction for an unpinned action with secrets or dangerous permissions. But even this is debatable. The current Pinned-Dependencies scoring is not simple, so it wouldn't be simple to define this "smaller" reduction.

evverx commented 1 year ago

penalise the "allowed actions" less

@diogoteles08 dunno. I think it would just complicate things, and I don't think it's worth it. Maybe it would be better if the overzealous "security" tools/scanners/analyzers using scorecard were smart enough to deal with that.

I'm not sure what else I can add here, so I'm going to focus on that SBOM nonsense. Luckily it hasn't been integrated yet :-) (Don't get me wrong, I think SBOMs make sense, but the "let's push them upstream without any clear rationale and use cases and penalize projects if they don't comply" nonsense is what concerns me.)

evverx commented 1 year ago

I think when starting to look for use of https://github.com/actions/upload-artifact in particular, the question would be how special that action really is

@hartwork I missed that comment. Just to clarify, I didn't mean to say that there is anything inherently wrong with that action; it does what it's supposed to do. Your use case is safe because I'm pretty sure you can't be tricked into downloading bogus artifacts when you do it manually. Even those nightly builds can be distributed via artifacts properly if they are signed with something random PRs don't have access to (it isn't clear where consumers would verify those signatures :-) but that's another matter). What I was trying to say is that it's a bit harder to protect artifacts, because it's easier to mess with them than with actual releases. Then again, the GitHub setting preventing CI from running on PRs submitted by first-time contributors kind of mitigates that too (which explains why, when it was introduced, a lot of weird accounts started fixing typos in upstream projects for no apparent reason :-)).

evverx commented 1 year ago

security-events: read can be dangerous. So we might want to define it as a "dangerous permission" and keep punishing unpinned actions that use this permission.

@diogoteles08 I'd punish GitHub here first: https://github.com/ossf/scorecard/issues/3131#issuecomment-1584071497 :-) Either way, thanks for bringing it to my attention :-)

evverx commented 1 year ago

It's off-topic here. Sorry about that!

@evverx I understand your complaints about false positives and annoying notifications on GitHub and we are taking them seriously! As you said, this current issue intends to reduce false positives

Since I keep receiving alerts like https://github.com/google/oss-fuzz-vulns/pull/35#issuecomment-1586018930, I'll try to sum up why I think it keeps happening. I think the problem is that tools like scorecard, OSV and so on are developed in a vacuum, in the sense that their issues and limitations, like https://github.com/google/osv.dev/issues/918, don't seem to be tracked at all. Those limitations aren't documented either. Because of that, consumers probably assume all those alerts make sense, don't even try to question what they see, and annoy open-source projects instead.

I think it would be nice if the GOSST team could monitor things like that. As far as I understand, @pnacht compares scorecard releases, for example (and I think that testing technique should be used before scorecard is released), but I think it would make sense to go further: look at scorecard results for projects over time and try to figure out whether they make sense and how they can be improved. I don't think this should be offloaded onto maintainers. Also, I took a look at the issues reported by @joycebrum, @diogoteles08 and the other members of the team, and it's weird to me that they aren't prioritized. I somehow thought that what maintainers say was just ignored, but it turns out all the feedback is actually relayed thoughtfully. (Feature requests targeting consumers seem to be much more important, and that's concerning.)

Edit: FWIW I stopped adding new stuff to OSS-Fuzz because the scanners/CVE generators and so on are too much: https://github.com/google/oss-fuzz/pull/8699. If the idea is to actually improve anything, maybe all that automation shouldn't annoy maintainers.

diogoteles08 commented 1 year ago

Hey @evverx, I appreciate that you marked this as off-topic; I'll reply and also mark my answer off-topic, because I think I can contribute here. I agree with everything you said and I'm happy to say that we're already working towards these goals. Part of the GOSST team works directly on suggesting and implementing security enhancements in critical OSS projects. We usually use Scorecard to evaluate what and where to work, so we run into false positives and opportunities for improvement first-hand. A crucial part of our work is to register that feedback and use it to prioritize the issues that are most relevant to both consumers and maintainers. We're also working towards implementing some of them, as you can see from me being assigned to this one (although after our discussion I don't think it's a priority anymore :/).

Please read more about what we're doing in our 1-year blogpost.

spencerschrock commented 1 year ago

This was brought up in the sync yesterday, and the discussion centered around the optics of false positives: namely, that if scorecard can't detect whether workflow artifacts are being used in a dangerous way, then it shouldn't judge them. This is of course a consumer preference, and some people/use cases have different tolerances for false positives.

The general consensus was to avoid judging possible misuse of workflow artifacts (although perhaps throwing a log statement). This would keep the determination of a non-privileged workflow to:

  • no privileged permissions
  • no use of secrets

evverx commented 1 year ago

no privileged permissions

Unfortunately scorecard can't detect that reliably, because the "read-only" GitHub setting is out of reach. It can still trigger unwanted PRs like https://github.com/step-security/secure-repo/issues/462#issuecomment-1265521602.
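For what it's worth, what static analysis can see is a permissions block declared in the workflow file itself, as opposed to the repository-level default, e.g.:

```yaml
# Visible to static analysis, unlike the repository-level
# "read-only" default setting, which can't be observed from the
# workflow file alone.
permissions:
  contents: read
```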

(Thanks to the link @diogoteles08 posted, I now know that that stuff is prompted by bug bounties: https://crowdfunding.lfx.linuxfoundation.org/initiative/049febeb-075e-4cbc-8d98-c5ef62483388/financial. It would be nice, by the way, if PRs indirectly sponsored by Google/OpenSSF disclosed that.)

Use of secrets

If secrets are used to keep, say, "junk" tokens used to push stuff to third-party CI services, I'm not sure it should affect repositories. For example, back in the day there was a fuzzing service that used tokens to authenticate jobs sent from GitHub. It was known that those tokens could be dumped by any PR, could be abused to overflow the queue there and so on, but that service was willing to accept the risk. I'm not sure how often tokens like that are used, but my guess is that the idea is to track secrets used to publish stuff consumed by people, and I'm not sure that can be detected automatically.

github-actions[bot] commented 11 months ago

This issue is stale because it has been open for 60 days with no activity.

diogoteles08 commented 10 months ago

Is it common for workflow artifacts created with upload-artifact to be used in security-critical processes? Are there projects that use upload-artifact to create their release artifacts (and then manually transfer them to the release, I suppose)?

I think I've found an example of this: https://github.com/PyTables/PyTables/blob/e37ff3983d4df38761d235fb244e688a5f62f848/.github/workflows/wheels.yml

IIUC, every time a new release is made, this workflow is responsible for building the Python wheels of the release and uploading them with upload-artifact.
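The pattern there boils down to roughly this (a condensed sketch, not the actual PyTables workflow):

```yaml
jobs:
  build_wheels:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build wheels
        run: pipx run cibuildwheel
      # The wheels end up as workflow artifacts and are later
      # transferred to the release by hand.
      - uses: actions/upload-artifact@v3
        with:
          name: wheels
          path: wheelhouse/*.whl
```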

github-actions[bot] commented 7 months ago

This issue is stale because it has been open for 60 days with no activity.

jsoref commented 7 months ago

FWIW actions/upload-artifact@v4 (an API break from v3) generates sealed, immutable artifacts: they can't be changed without something that has permission deleting the artifact first.