felickz opened this issue 10 months ago
Hi @felickz, I really appreciate the reproduction and detailed logs! Much of this does sound very strange to me, but one thing sticks out as especially strange:
Most interesting is that re-running dependency review task at any point in the future succeeds after 2 tries
Can you clarify that this applies truly at any point? In other words, if you wait for a very long time (like an hour), does it still retry at least once before succeeding?
In the sample I showed, it was at about 15 minutes:
2023-12-04T18:13:32.1693442Z Retry timeout exceeded. Proceeding...
2023-12-04T18:30:30.0878871Z - retry succeeded
In a previous repro, I had run another time with 5 minutes between runs, with similar behavior, but the rerun did not mention any "no snapshots found" warning (initial run):
2023-12-04T17:59:17.1762701Z Retry timeout exceeded. Proceeding...
2023-12-04T18:04:56.0137937Z Dependency review did not detect any denied packages
@febuiles I don't have a ton of time to devote to this and nothing's really jumping out at me, but I do have a vague suspicion that this may be related to https://github.com/github/dependency-snapshots-api/pull/615. I thought that would only change the way we handle new canonical snapshots, and that shouldn't matter here, but if there's some reason why it's taking a while for new snapshots to be written to the ds_snapshots table, then that could explain what's going on here.
@juxtin thanks for the extra feedback 🙇
@jamisonhyatt any ideas if the locks-related change could be having an impact downstream like this? Does any of this look suspicious to you?
I have another oddity: it seems submission actions like anchore/sbom-action use a merge commit when they upload a snapshot while running on a pull_request trigger:
##[debug] "eventName": "pull_request",
##[debug] "sha": "6f50a7568be909f93a78e164a90362a365f92a44",
##[debug] "ref": "refs/pull/22/merge",
##[debug] "workflow": "Syft SBOM Action",
##[debug] "action": "__anchore_sbom-action_2",
##[debug] "actor": "felickz",
##[debug] "job": "buildAndUpload",
##[debug] "runNumber": 214,
##[debug] "runId": 8290771455,
Then the DR action (with an on pull_request trigger) looks for the SHA of the latest commit in the HEAD branch, leading to:
No snapshots were found for the head SHA a9e00023489e612bb5cfbc81ea97202a30af124e.
Is this something the toolkit is intended to handle?
I am not totally clear what this SHA is showing me; it might be a temporary hidden merge branch that is used to check "This branch has no conflicts with the base branch"? Would it make any sense for DR to also look at this SHA for snapshot submissions?
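To illustrate the difference, here is a minimal throwaway sketch (the job and step names are made up, but both context values are standard GitHub Actions expressions): on a pull_request run, github.sha points at the temporary merge commit under refs/pull/<n>/merge, while github.event.pull_request.head.sha is the head-branch commit the review action looks up.

```yaml
# Minimal sketch (job and step names are made up). On a pull_request trigger,
# github.sha points at the synthetic merge commit under refs/pull/<n>/merge,
# while github.event.pull_request.head.sha is the PR head-branch commit.
on: pull_request

jobs:
  debug-shas:
    runs-on: ubuntu-latest
    steps:
      - name: Print merge-commit SHA vs PR head SHA
        run: |
          echo "merge commit (github.sha):       ${{ github.sha }}"
          echo "PR head (pull_request.head.sha): ${{ github.event.pull_request.head.sha }}"
```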
Need to research whether the issue is with the backend service or a potential race condition.
I also encountered this problem, only I'm using the gradle/actions/dependency-submission action to submit the dependency snapshot.
With a known vulnerable dependency added, I could re-run the review workflow at least 3 times, all ending in a timeout and no vulnerabilities reported. The timeout was set to 10 minutes and the three runs were consecutive...
Then, after waiting for maybe 30+ minutes (searching for answers and coming back for debug logs), the re-runs suddenly started consistently reporting a "vulnerable dependency detected". It reminded me of how replication lag behaves as the snapshot propagates across data stores...
I couldn't find an option for failing the build on timeouts, and with the green checkmark on the PR I doubt many people will study the logs to see if it all ran as they expected.
Should it fail, or at least give a warning, when the timeout expires and there's still no snapshot?
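A possible interim workaround (an untested sketch, not something the action provides): add a step after the dependency review step that queries the documented dependency graph compare REST endpoint (GET /repos/{owner}/{repo}/dependency-graph/compare/{basehead}) and fails the job when the call fails or returns no changes. Treating an empty comparison as a missing snapshot is my assumption; a PR with genuinely no dependency changes would also trip this check.

```yaml
# Hypothetical guard step (untested sketch): add after the dependency-review step.
# It calls the documented dependency graph compare endpoint and fails the job when
# the comparison fails or returns no changes.
- name: Fail if no dependency snapshot data was found
  env:
    GH_TOKEN: ${{ github.token }}
    BASE_SHA: ${{ github.event.pull_request.base.sha }}
    HEAD_SHA: ${{ github.event.pull_request.head.sha }}
  shell: bash
  run: |
    if ! changes=$(gh api "/repos/${{ github.repository }}/dependency-graph/compare/${BASE_SHA}...${HEAD_SHA}" --jq 'length'); then
      echo "::error::Dependency graph comparison failed for ${BASE_SHA}...${HEAD_SHA}"
      exit 1
    fi
    if [ "${changes}" -eq 0 ]; then
      echo "::error::No dependency changes returned; the snapshot for ${HEAD_SHA} may be missing"
      exit 1
    fi
```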
Hello everyone,
I believe this is an issue with the dependency-review-action that renders it completely unusable for Gradle projects.
Following the documentation for gradle-build-action, setup-gradle, and dependency-submission, I've configured the following workflow files:
Generating a dependency graph and uploading it to artifacts: code-scanning.yml; Run log
Waiting for the first step to complete, then downloading the dependency graph from artifacts and submitting it: dependency-graph.yml; Run log
Waiting for the second step to complete, updating snapshots, and checking if all introduced/updated dependencies in the PR are Apache-2.0 compatible: license.yml; Run log (a simplified sketch of this workflow is shown below)
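For reference, the third workflow is roughly shaped like the sketch below (simplified and not the exact file; in reality it is triggered off the second workflow's completion rather than directly on pull_request, and the inputs shown are documented options of the dependency-review-action):

```yaml
# Simplified sketch of license.yml (illustrative, not the exact file; the real
# workflow is triggered after the snapshot-submission workflow completes).
name: License check
on: pull_request

permissions:
  contents: read

jobs:
  license-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          # Allow-list of acceptable licenses (the real file may list more
          # Apache-2.0-compatible SPDX identifiers).
          allow-licenses: Apache-2.0
          # Wait for the snapshot submitted by the other workflow.
          retry-on-snapshot-warnings: true
          retry-on-snapshot-warnings-timeout: 600
```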
However, even though the second step has been completed, the dependency-review-action consistently outputs the following log:
Retrying in 10 seconds...
No snapshots were found for the head SHA 96f39e13dc6d1db148fc6e8cacaac18fdb2ae285.
Retry timeout exceeded. Proceeding...
Dependency review did not detect any denied packages
Additionally, the dependency-review-action fails to detect any changes in Java dependencies in this PR.
I kindly request the community to address this issue as soon as possible. My PR, which I've invested considerable research time into, is currently stalled because of this issue. Thank you very much.
Using retry-on-snapshot-warnings for a submission from a different workflow, as described in the docs. If the snapshot upload completes during the phase where the review task is waiting for an upload against the head SHA, none of the retries pick it up. If you re-run the review workflow, it picks up the newly committed snapshot.

On Push:
On PR: Retry timeout exceeded. Proceeding... (after 4m 37s)

Submission Workflow (commit 63d50c7154fc8bfb6ce9173f0d0edfe5f31d810f)
The snapshot submission includes:
"sha": "63d50c7154fc8bfb6ce9173f0d0edfe5f31d810f"
"ref": "refs/heads/feature/FSharp-Data"
... and completes within a second, at 2023-12-04T18:10:55.414Z.

Review Workflow
Review Workflow - run 2

Most interesting is that re-running the dependency review task at any point in the future succeeds after 2 tries (it doesn't mention a snapshot found, but looking at the detections it has found dependencies that only exist in the snapshot manifest).
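For completeness, the review workflow is configured roughly along the lines of the docs' cross-workflow example (a sketch from memory, not the exact file; the timeout value is illustrative, and both retry inputs are documented options of the action):

```yaml
# Sketch of the review workflow described above (not the exact file; the
# timeout value is illustrative). The retry inputs are the documented options
# for waiting on a snapshot submitted by a different workflow.
name: Dependency Review
on: pull_request

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          retry-on-snapshot-warnings: true
          # Seconds to keep retrying before "Retry timeout exceeded. Proceeding..." is logged.
          retry-on-snapshot-warnings-timeout: 300
```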