sedefsavas closed this issue 2 weeks ago
cc @randomvariable @detiber
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/lifecycle frozen
Related to this issue: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/2088
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/triage accepted /remove-lifecycle stale /life-cycle active
/lifecycle active
There are several PRs open at the moment related to this issue, regarding GitHub Actions and the permissions for running them.
/priority important-soon
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.

You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted /priority important-longterm
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.

You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/retitle AMI build/test/publish automation
/assign /lifecycle active
/triage accepted /priority critical-urgent
With the new AWS account, I have started work on the automation for this using GHA.
This issue is labeled with priority/critical-urgent but has not been updated in over 30 days, and should be re-triaged.
Critical-urgent issues must be actively worked on as someone's top priority right now.

You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority {important-soon, important-longterm, backlog}
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
We have some automation in place now with GHA, so:
/close
@richardcase: Closing this issue.
Based on the feedback in https://docs.google.com/document/d/142YzzRj2H_OWEUE03Vrw3D-XpDCWM8RviFGHT5-RBxo/edit?usp=sharing:

An AMI-version.yaml file will be added to the CAPA repository that contains all Kubernetes releases that CAPA AMIs are created from. Also, a CAPA-robot similar to k8s-release-robot will be created to automatically create PRs when there are new Kubernetes releases.

Roughly, we will need 4 different Prow jobs to automate the build/test/publish workflow:

1. Periodic Kubernetes release detection job: periodically checks whether there is a new Kubernetes release and, if there is, uses CAPA-robot to create a new-release PR against the CAPA repository that adds the new release to AMI-version.yaml.
2. Pre-submit test-AMI build job: triggered when a PR (such as a new-release PR) modifies AMI-version.yaml. It builds AMIs following the changes in AMI-version.yaml; these AMIs will be used to run the conformance tests. Here, we need a way to detect changes in AMI-version.yaml and act only on them; api-machinery is one option. This job will use the CNCF AWS account.
3. Pre-submit AMI conformance test job: detects the changes in AMI-version.yaml and runs conformance tests using only the recently built AMIs. This job will use the test AWS account.
4. Post-submit AMI promote job: after the PR created by CAPA-robot is manually merged (once the conformance tests pass), new "published" AMIs will be created referencing the same EBS snapshot used by the test AMIs, which acts as "promotion" of the image. This job will use the CNCF AWS account.
5. Pre-submit clean-up job: a job that can be triggered manually to delete all newly created AMIs after a PR is created. This may come in handy if the conformance tests fail and we need to retrigger the pre-submit test-AMI build job.

To initiate a rebuild, either to fix a previous AMI image or to include OS patches, a PR will be created manually with a new BUILD_REVISION instead of relying on the periodic job. The rest of the workflow is the same as for creating a new AMI.
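The comment referenced an example AMI-version.yaml that is not reproduced here; as a purely hypothetical sketch (field names are illustrative, not from the proposal doc), such a file might look like:

```yaml
# Hypothetical sketch of AMI-version.yaml; field names are illustrative,
# not taken from the proposal doc. Each entry is a Kubernetes release
# that CAPA AMIs are built from, with a BUILD_REVISION for manual rebuilds.
kubernetes_releases:
  - version: v1.22.3
    build_revision: 0
  - version: v1.23.0
    build_revision: 1   # bumped to rebuild, e.g. to pick up OS patches
```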
Follow up issue to https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/1861
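The core decision of the periodic release-detection job described above can be sketched in a few lines. This is a minimal illustration, assuming the hypothetical AMI-version.yaml layout with a `kubernetes_releases` list; in practice the job would fetch the latest stable version from the upstream release channel and open a new-release PR via CAPA-robot when a release is missing:

```python
# Sketch of the periodic release-detection job's core logic.
# Assumes a hypothetical AMI-version.yaml layout with a top-level
# "kubernetes_releases" list; the real schema is whatever the proposal
# doc settles on.

def parse_version(v):
    """Parse a version string like 'v1.22.3' into a comparable tuple (1, 22, 3)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def needs_new_release_pr(known_releases, latest_stable):
    """Return True if latest_stable is not yet tracked in AMI-version.yaml."""
    known = {parse_version(r) for r in known_releases}
    return parse_version(latest_stable) not in known

# Example: versions already listed in AMI-version.yaml vs. the latest
# stable release reported upstream.
known = ["v1.21.5", "v1.22.3"]
print(needs_new_release_pr(known, "v1.23.0"))  # True: open a new-release PR
print(needs_new_release_pr(known, "v1.22.3"))  # False: already tracked
```

The job itself would run this comparison on a schedule as a periodic Prow job, with the "open a PR" side handled by CAPA-robot.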