Closed laurentsimon closed 2 years ago
FYI @azeemsgoogle
Successful results still appear in the GH security dashboard, because we mark them as note. Do we want to keep as-is or remove the notes entirely?
I would personally remove the notes entirely -- I don't think they have much value to the user (who could maybe go see a list of checks anyway on Scorecard), and I think it makes the important stuff less visible.
Long description: shall we keep it long or just give users a URL?
I like it long, don't like link chasing.
Short description should be shorter
Is this the title? If so, is it canonical to have the title be the action of the check, or something indicating it's wrong, e.g. the difference between "Determines if..." and "Check failed: Project workflow does not follow"?
This looks great. Some comments:
- The titles `Determines if ...` should be shorter and easier to understand. Something like `Dependencies are unpinned`.
- The short description should be shorter. Also, it has some HTML elements which don't get rendered well.
- Remove alerts which pop up as `Notes`.
- Can we also make use of the `Severity` level (Low, Medium etc.)?

Thanks both of you for the feedback!
> The titles `Determines if ...` should be shorter and easier to understand. Something like `Dependencies are unpinned`.

> The short description should be shorter. Also, it has some HTML elements which don't get rendered well.

The description is set to the long description atm. It does not support links. Maybe we should remove it? Note that the description is also used by GH next to the line with the rule ID https://github.com/ossf/scorecard/security/code-scanning/2865?query=ref%3Arefs%2Fheads%2Fmain so it's redundant. I have not tested what happens when there are multiple file locations reported.

> Remove alerts which pop up as `Notes`.

Looks like nobody likes it :-) Do we want a GitHub version without and a general SARIF version with it?

> Can we also make use of the `Severity` level (Low, Medium etc.)?

In the making, yes.
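For context, GitHub's code-scanning dashboard derives the Low/Medium/High/Critical label from a `security-severity` score attached to the SARIF rule, falling back to the SARIF `level` when the score is absent. A minimal sketch of what such a rule entry could look like (the rule id, description, and score here are illustrative, not scorecard's actual output):

```json
{
  "id": "TokenPermissionsID",
  "shortDescription": { "text": "Token permissions are not read-only" },
  "defaultConfiguration": { "level": "warning" },
  "properties": {
    "security-severity": "7.5"
  }
}
```

GitHub buckets the score roughly as under 4 = Low, 4-6.9 = Medium, 7-8.9 = High, and 9+ = Critical; exact cut-offs are defined by GitHub's code scanning documentation and may change.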
note: if a check has multiple findings (e.g. multiple binaries found in repo), GH only shows the first finding... which is not ideal. There's not much I can do about it though...:(
> note: if a check has multiple findings (e.g. multiple binaries found in repo), GH only shows the first finding... which is not ideal. There's not much I can do about it though...:(
Interesting. Any reason there can't be multiple findings from a single check?
Unless I've broken something again, you should see severity levels in the scanning alert tab for our GitHub action https://github.com/ossf/scorecard/security/code-scanning
> Unless I've broken something again, you should see severity levels in the scanning alert tab for our GitHub action https://github.com/ossf/scorecard/security/code-scanning
It is a lot better 👏
Are specific permissions needed to see https://github.com/ossf/scorecard/security/code-scanning ? I receive a 404 @laurentsimon
> Are specific permissions needed to see https://github.com/ossf/scorecard/security/code-scanning ? I receive a 404 @laurentsimon
You need write permission to view a summary of all the alerts for a repository on the Security tab. https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/managing-code-scanning-alerts-for-your-repository#viewing-the-alerts-for-a-repository
@tom--pollard thanks for the interest in trying it out! You can easily test it out on a repo you own:
- Add a `scorecard-policy.yml` under `.github/`, as in https://github.com/ossf/scorecard/blob/main/.github/scorecard-policy.yml
- Add a `scorecard-analysis.yml` workflow under `.github/workflows/` - just copy paste from https://github.com/ossf/scorecard/blob/main/.github/workflows/scorecard-analysis.yml

The workflow currently supports cron triggers and push triggers for the main/master branch only. I suggest you try the push trigger on the main branch as configured in our workflow yaml file. As a simple verification test, I created a few dummy `.whl` files in https://github.com/laurentsimon/scorecard-action-test. You should see them under https://github.com/<owner>/<repo>/security/code-scanning.
Let me know if you encounter problems.
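For reference, the workflow file mentioned above could look roughly like the sketch below. The action inputs and version tags are assumptions (the action was evolving at the time of this thread); copy the linked scorecard-analysis.yml for the authoritative version.

```yaml
# Sketch of a scorecard-analysis.yml workflow; input names are illustrative.
name: Scorecard analysis
on:
  push:
    branches: [main]
  schedule:
    - cron: '30 1 * * 6'   # weekly cron trigger

jobs:
  analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Scorecard
        uses: ossf/scorecard-action@main   # pin to a full SHA in real use
        with:
          results_file: results.sarif
          results_format: sarif
      - name: Upload SARIF to the code-scanning dashboard
        uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: results.sarif
```

The upload step is what makes the results appear under the repository's Security > Code scanning tab.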
@asraa @priyawadhwa would you be interested in trying out the action on sigstore, envoy and tekton repos? It's not ready yet, but should be sometime in the next month.
@laurentsimon What's the latest on this? Love seeing the scorecard integrate with Actions and the code-scanning tab. Let me know if you need any connection to get that into the code scanning actions list in the security tab.
Thanks. Also, what is the process to get it verified (https://docs.github.com/en/developers/github-marketplace/github-marketplace-overview/about-marketplace-badges)?
> @laurentsimon What's the latest on this? Love seeing the scorecard integrate with Actions and the code-scanning tab. Let me know if you need any connection to get that into the code scanning actions list in the security tab.
Thanks Justin. Is this the same as being published on the marketplace? (We'd love to do that). What's a typical timeline to have an action published?
@laurentsimon There are a couple of places you can get this listed including the marketplace and the code scanning tab (eg https://github.com/ossf/scorecard/security/code-scanning/setup). We'd be happy to work with you on this. Let me tag my colleague @josepalafox who can follow-up on the process bits?
> @laurentsimon There are a couple of places you can get this listed including the marketplace and the code scanning tab (eg https://github.com/ossf/scorecard/security/code-scanning/setup). We'd be happy to work with you on this. Let me tag my colleague @josepalafox who can follow-up on the process bits?
Thanks!
When you make a release of the action there's a checkbox for admins to "publish to marketplace" assuming you have the right permissions you should just see this, else TO-DO is get the permissions or find someone with them that can publish to marketplace:
https://docs.github.com/en/actions/creating-actions/publishing-actions-in-github-marketplace
Next please make a PR using the security starter workflow template here: https://github.com/actions/starter-workflows
Ignore parts of the prompt that are not applicable like "are you a part of the partner program?". The major pieces will be the full SHA, logo, 140-char description, and the workflow.yml file.
I would like to float some additional development ideas for this action that I think could be really helpful. If there's a committee meeting I can attend to share a few ideas, let me know.
Thanks for the info. We'll start looking into this.
Feel free to attend our bi-weekly meeting, we'd love to hear your suggestions. The next bi-weekly meeting is next Monday at 3:30pm PST. It's an OSSF public meeting, see https://github.com/ossf/scorecard#connect-with-the-scorecards-community
You can see the meeting notes and add items to them by joining the ossf-scorecard-dev@googlegroups.com mailing list, as pointed out in the link I pasted above.
@laurentsimon I wonder if it would be possible to run it on PRs to analyze diffs to make sure new alerts or regressions aren't introduced? While running it on push events is helpful I think it would be even better if PRs with new alerts didn't make it to repositories in the first place.
yes, we have a working prototype that runs on PR already. GitHub takes care of deduplication using the SARIF format, so we (scorecard) need not worry about the diff ourselves, which is really helpful.
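To illustrate the deduplication point: GitHub matches alerts across analyses using the result's rule id, location, and `partialFingerprints` when present, so the same finding re-reported on a push or PR doesn't create a duplicate alert. A sketch of a single SARIF result (the rule id, path, and fingerprint values are made up for illustration):

```json
{
  "ruleId": "PinnedDependenciesID",
  "level": "warning",
  "message": { "text": "GitHub-owned action not pinned by hash" },
  "locations": [
    {
      "physicalLocation": {
        "artifactLocation": { "uri": ".github/workflows/ci.yml" },
        "region": { "startLine": 12 }
      }
    }
  ],
  "partialFingerprints": { "primaryLocationLineHash": "2d0a8f9c:1" }
}
```

If the producer omits `partialFingerprints`, GitHub computes its own line-hash fingerprint on upload.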
> we have a working prototype that runs on PR already
Good to know. Thanks! Just to be sure, does that mean that I can in theory start using it in my experimental fork?
I took a look at the config at https://github.com/ossf/scorecard/pull/1073/files#diff-4341708db212df7d9bb86a51df68dc97ff9a262aec1eacddff2552a363b08c9f. Is my understanding correct that to turn on just the "token-permission" check I should add something like the following?

```yaml
version: 1
policies:
  Token-Permissions:
    score: 10
    mode: enforced
```
What happens if I turn on another check where the score is, say, 5 and it stays the same when a PR with new issues is opened?
Thinking about it, looks like some checks like the "Pinned-Dependencies" check should be more granular to make it easier to use them to analyze PRs. Generally I don't care about the `pip install` scorecard complains about (which is "won't fix" as far as I can tell, judging from the PR where it was introduced) but I'd certainly like to know when new GH Actions aren't pinned.
You're right, more granularity would be nice. We're working on it but it will take some time.
> we have a working prototype that runs on PR already
>
> Good to know. Thanks! Just to be sure, does that mean that I can in theory start using it in my experimental fork?
Correct. But we're testing, so it may break when we update it. We'll release it publicly in a couple of months.
> What happens if I turn on another check where the score is, say, 5 and it stays the same when a PR with new issues is opened?
what do you mean?
> what do you mean?
I was talking mostly about the "pinned-dependencies" check. I can't set it to 10 due to `pip install` in some test script, so it seems I have to downgrade it in the config file a bit. My understanding is that when a PR with new alerts (that doesn't affect the score overall) is opened, it isn't reported if the score is the same, while in some cases I'd like it to be reported. I hope that makes any sense :-)
More importantly, I wonder if those new alerts introduced in PRs will be visible to external contributors or will they be hidden behind the "security scanning" tab?
> what do you mean?
>
> I was talking mostly about the "pinned-dependencies" check. I can't set it to 10 due to `pip install` in some test script so it seems I have to downgrade it in the config file a bit. My understanding is that when a PR with new alerts (that doesn't affect the score overall) is opened it isn't reported if the score is the same while in some cases I'd like it to be reported. I hope that makes any sense :-)
it does. It's a limitation of the current checks. This will be fixed in 2022 :-)
> What's more important I think I wonder if those new alerts introduced in PRs will be visible to external contributors or will they be hidden behind the "security scanning" tab?
I don't think they will. They are still hidden behind the scanning dashboard.
That would make those PR checks less useful than they could be, I think, because contributors would need to wait for maintainers to go to the dashboard and report those issues to them (and nobody is going to do that, probably). Generally I don't think new alerts should be hidden. They are new after all.
I think you're right: they must be available. Since they are introduced by the PR a contributor wrote, there's no reason to hide them. I have not verified, though.
@laurentsimon I ran the action and noticed that the "binary artifacts" alerts I complained a lot about were classified as "(Test)", so it seems https://github.com/ossf/scorecard/issues/1270 could be put on hold in a way.
Anyway, is that a GitHub feature? If so, I wonder how it's implemented? It would help to solve https://github.com/ossf/scorecard/issues/1256
I think it's a GitHub feature, yes. I suspect they look for "test" in the path. @josepalafox do you know?
@josepalafox thank you for the link! Unfortunately it isn't mentioned there how it's implemented. My guess would be that the code assigning labels to alerts came from LGTM and, based on my experience with it, it's pretty well tuned, so it would be great if it could be borrowed somehow. I opened https://github.com/ossf/scorecard/pull/1263 a few days ago where I filtered out all the files containing `test`, but I'm concerned that it's too broad.
@laurentsimon looks like some checks like "Fuzzing" and "CII-Best-Practices" shouldn't pop up when the action is run on forks (to be honest, I don't think "CII-Best-Practices" should pop up anywhere at all even though all the projects I more or less contribute to have it). Anyway, would it be possible for the action to avoid reporting some issues depending on where it's run? I of course can always add `github.repository == 'systemd/systemd'` to block the action, but that would prevent external contributors from getting relevant alerts easily.
> @laurentsimon looks like some checks like "Fuzzing" and "CII-Best-Practices" shouldn't pop up when the action is run on forks (to be honest, I don't think "CII-Best-Practices" should pop up anywhere at all even though all the projects I more or less contribute to have it). Anyway, would it be possible for the action to avoid reporting some issues depending on where it's run? I of course can always add `github.repository == 'systemd/systemd'` to block the action but that would prevent external contributors from getting relevant alerts easily.
Interesting idea. Which checks would you like to keep in a fork?
@laurentsimon initially I thought that "Fuzzing", "CII-Best-Practices", "Signed-Releases", "Branch-Protection" and "Packaging" should be excluded there, and then I realized that some forks are used to keep "stable" branches of upstream projects that are used by downstream projects like various linux distributions, and there "Signed-Releases", "Branch-Protection" and "Packaging" probably shouldn't be skipped. So it seems only "Fuzzing" and "CII-Best-Practices" can be safely turned off when forks are analyzed. The other checks I think should probably be configured via the config file and `if: github.repository`, and that's already possible as far as I can tell.
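The `if: github.repository` idea above can be attached at the job level so that forks skip the job entirely. A sketch (`systemd/systemd` is just the repository used as the example in this thread):

```yaml
jobs:
  scorecard-analysis:
    # Only run in the upstream repository; forks skip the whole job.
    if: github.repository == 'systemd/systemd'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... scorecard steps ...
```

A finer-grained alternative, as discussed above, would be to keep the job running on forks but disable individual checks (e.g. Fuzzing, CII-Best-Practices) in the policy file.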
Regarding "CII-Best-Practices" I think it shouldn't affect the score at all and if it should be promoted for whatever reason it would make sense to make it informational. The problem is that all that check shows is that at some point in the past some projects followed the practices CII considers "best practices" and generally they aren't kept up to date so looking at a badge I can't be sure that whatever it says is up to date. For example, at some point the systemd project used only Coverity Scan and when it was down for almost half a year the badge kept saying a static analysis tool was used. When projects migrate from one hosting service to another they tend to lose a couple of jobs where code is covered with sanitizers for example and their badges don't reflect that. This static nature of the CII badges I think isn't exactly in line with scorecard
being able to show the actual state.
@laurentsimon trying to figure out why I got the error, I noticed that the action uses 2.2.8-103-g12ef070 (which as far as I can tell is kind of old). I wonder if it would be possible to point the action to a version of `scorecard` with all the changes to the "Binary-Artifact", "Fuzzing", "SAST", "CI-Test", "Protected-Branches" checks that have been merged this week included?
Also it doesn't seem to be possible to pass "--verbosity Debug" to `scorecard` via the action, which makes it hard to figure out, for example, why scorecard thinks that PRs weren't reviewed even though they were, or where SASTs were lost. I'm not sure where those debug messages should end up in the end though (maybe right where short descriptions are?)
> @laurentsimon trying to figure out why I got the error, I noticed that the action uses 2.2.8-103-g12ef070 (which as far as I can tell is kind of old). I wonder if it would be possible to point the action to a version of `scorecard` with all the changes to the "Binary-Artifact", "Fuzzing", "SAST", "CI-Test", "Protected-Branches" checks that have been merged this week included?
We've seen this error before. Thanks for sharing. FYI @azeemshaikh38. @josepalafox do you know why this happens only sometimes? It looks like a permission problem, but it happens very rarely, so it's very hard to reproduce and understand the root cause.
> Also it doesn't seem to be possible to pass "--verbosity Debug" to `scorecard` via the action, which makes it hard to figure out for example why scorecard thinks that PRs weren't reviewed even though they were or where SASTs were lost. I'm not sure where those debug messages should end up in the end though (maybe right where short descriptions are?)
Great point. I could do a debug build. Currently scorecard ignores Debug messages when creating SARIF output. I can hack something up temporarily to help you debug. Maybe I could add it to the description for debugging?
> @laurentsimon initially I thought that "Fuzzing", "CII-Best-Practices", "Signed-Releases", "Branch-Protection" and "Packaging" should be excluded there and then I realized that some forks are used to keep "stable" branches of upstream projects that are used by downstream projects like various linux distributions and there "Signed-Releases", "Branch-Protection" and "Packaging" probably shouldn't be skipped. So it seems only "Fuzzing" and "CII-Best-Practices" can be safely turned off when forks are analyzed. The other checks I think should probably be configured via the config file and `if: github.repository`, and that's already possible as far as I can tell.
can you create a tracking issue for this?
> I can hack something up temporarily to help you debug. I could add it to the description maybe for debugging?
I think this feature should be permanent, to make it easier for maintainers to dispute scorecard findings :-) It should probably be turned off by default though.
> can you create a tracking issue for this?
@laurentsimon I'm not sure the action can be used when PRs are opened while issues like https://github.com/ossf/scorecard/issues/1268#issuecomment-972705617 keep affecting the checks unpredictably. Diffs will most likely trigger bogus alerts unrelated to PRs due to checks like SAST and CI-Test producing different scores depending on when they are run.
These checks are not enabled on pull requests by default. The reasoning is that a pull request cannot fix them, so a developer would need to disable scorecard or change its config until the check passes. We considered this a usability issue. In a nutshell, we only enable checks that a PR may have an effect on, i.e., those that only use source code and not GitHub APIs. It's indicated in https://github.com/ossf/scorecard/blob/main/docs/checks/internal/checks.yaml#L21: `GitHub` means the check uses GitHub APIs and cannot run on pull requests; `local` means it can run on a locally downloaded repo and will be run on pull requests. Thoughts?
Good to know. Thanks! I think that makes sense. I saw `--local` in https://github.com/ossf/scorecard-actions/blob/main/analyze/entrypoint.sh but wasn't sure what that is :-)
FYI, I'm going to be updating the repo of the scorecard action according to https://docs.github.com/en/actions/creating-actions/publishing-actions-in-github-marketplace#about-publishing-actions, so the scorecard action may break at times over the next few days. I'll post a message here when the transition is done.
I'm doing some dogfooding. Given the issue https://github.com/ossf/scorecard/issues/1415, I've created a second action to see if folks prefer the original one or the v2 one. This dogfoodv2 creates different result entries in the dashboard for all checks, including the Branch-Protection settings, Dependency-Update-Tool and SAST (those had a single entry in the dashboard in the previous version).
For Dependency-Update-Tool, this means one entry is created if dependabot is disabled and another one if renovate bot is disabled. If at least one of the tools is configured, both dashboard results will be marked as "fixed" and will no longer show up. I wonder if you find this good or confusing. Essentially, every `Detail` present in the scorecard's JSON results is used to create a dashboard entry.
For SAST, it means you'd see one entry for `X commits out of Y are checked with a SAST tool` and one entry for `CodeQL tool not detected`. This seems OK since a user may configure CodeQL and it will take time for the tool to catch up and be run on all commits after the config is enabled. So long as at least 1 commit is not checked by a SAST tool, the `X commits out of Y are checked with a SAST tool` entry will remain.
For Branch-Protection, it's pretty simple: one entry for each (settings,branchname) pair.
Here are the steps if you'd like to help me compare it to the previous version:
Closing this as we have already been using it.
Once https://github.com/ossf/scorecard/pull/1073 has landed, scorecard will run for each push and the results will be available in https://github.com/ossf/scorecard/security/code-scanning
I would like some feedback about the results. I've already noticed some improvements needed:
- Successful results still appear in the GH security dashboard, because we mark them as `note`. Do we want to keep as-is or remove the notes entirely? Note that it may be useful to keep it in SARIF format in general, but remove it for GitHub actions only.