jeremyrickard opened this issue 1 year ago
@cpanato this seems to be another race we hit when signing release artifacts. Do you want to give this a look? (maybe @puerco already did)
/remove-label priority/important-soon
/priority critical-urgent
@xmudrii: The label(s) `/remove-label priority/important-soon` cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor. Is this label configured under `labels -> additional_labels` or `labels -> restricted_labels` in `plugin.yaml`?
@kubernetes/release-managers Carlos is out for a couple of days, do we have any volunteers to support here?
First investigation: the certificate (`.cert`) file has to be written by cosign after the signature is written:
https://github.com/sigstore/cosign/blob/d1c6336475b4be26bb7fb52d97f56ea0a1767f9f/cmd/cosign/cli/sign/sign_blob.go#L120-L129
It looks like we never reach the point where the file is written, so I'm assuming that `len(rekorBytes) == 0` :thinking:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/assign
@jeremyrickard would it be possible to see more of the logs? I can't access them with the links above.
I am trying to create some context for myself to understand where in the process this happens. Is this the part triggered by `krel release`? If so, where during that part are we doing a blob sign?
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle frozen
What happened:
On a couple of the patch releases, we hit flakes with signing and needed to re-run the no-mock release stages:
Signing Flake for 1.24.9
Signing Flake on 1.23.15
Signing Flake on 1.22.17
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):