nrb opened this pull request 6 months ago
/test ci/prow/e2e-gcp-operator-encryption-single-node
@nrb: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:
/test e2e-aws-ovn
/test e2e-aws-ovn-serial
/test e2e-aws-ovn-upgrade
/test e2e-gcp-operator
/test images
/test k8s-e2e-gcp
/test unit
/test verify
/test verify-deps
The following commands are available to trigger optional jobs:
/test e2e-aws-operator-disruptive-single-node
/test e2e-aws-ovn-single-node
/test e2e-azure-ovn
/test e2e-gcp-operator-encryption-aescbc
/test e2e-gcp-operator-encryption-aesgcm
/test e2e-gcp-operator-encryption-perf-aescbc
/test e2e-gcp-operator-encryption-perf-aesgcm
/test e2e-gcp-operator-encryption-perf-single-node
/test e2e-gcp-operator-encryption-rotation-aescbc
/test e2e-gcp-operator-encryption-rotation-aesgcm
/test e2e-gcp-operator-encryption-rotation-single-node
/test e2e-gcp-operator-encryption-single-node
/test e2e-gcp-operator-single-node
/test e2e-metal-ovn-ha-cert-rotation-shutdown-180d
/test e2e-metal-ovn-ha-cert-rotation-shutdown-360d
/test e2e-metal-ovn-ha-cert-rotation-shutdown-90d
/test e2e-metal-ovn-ha-cert-rotation-suspend-180d
/test e2e-metal-ovn-ha-cert-rotation-suspend-360d
/test e2e-metal-ovn-ha-cert-rotation-suspend-90d
/test e2e-metal-ovn-sno-cert-rotation-shutdown-180d
/test e2e-metal-ovn-sno-cert-rotation-shutdown-360d
/test e2e-metal-ovn-sno-cert-rotation-shutdown-90d
/test e2e-metal-ovn-sno-cert-rotation-suspend-180d
/test e2e-metal-ovn-sno-cert-rotation-suspend-1y
/test e2e-metal-ovn-sno-cert-rotation-suspend-2y
/test e2e-metal-ovn-sno-cert-rotation-suspend-3y
/test e2e-metal-ovn-sno-cert-rotation-suspend-90d
/test e2e-metal-single-node-live-iso
/test k8s-e2e-gcp-serial
Use /test all to run the following jobs that were automatically triggered:
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-aws-operator-disruptive-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-aws-ovn
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-aws-ovn-serial
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-aws-ovn-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-aws-ovn-upgrade
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-aescbc
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-aesgcm
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-perf-aescbc
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-perf-aesgcm
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-perf-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-rotation-aescbc
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-rotation-aesgcm
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-rotation-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-encryption-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-gcp-operator-single-node
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-e2e-metal-single-node-live-iso
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-images
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-k8s-e2e-gcp
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-k8s-e2e-gcp-serial
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-unit
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-verify
pull-ci-openshift-cluster-kube-apiserver-operator-release-4.16-verify-deps
/test e2e-aws-operator-disruptive-single-node
/test e2e-gcp-operator-encryption-single-node
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: nrb. Once this PR has been reviewed and has the lgtm label, please assign vrutkovs for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
/jira refresh
@JoelSpeed: This pull request references Jira Issue OCPBUGS-34545, which is invalid:
Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.
The bug has been updated to refer to the pull request using the external bug tracker.
/retest-required
/retest
@nrb: The following tests failed. Say /retest to rerun all failed tests, or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-gcp-operator-encryption-perf-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-gcp-operator-encryption-perf-single-node |
| ci/prow/e2e-gcp-operator-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-gcp-operator-single-node |
| ci/prow/e2e-aws-operator-disruptive-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-aws-operator-disruptive-single-node |
| ci/prow/e2e-aws-ovn-upgrade | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | true | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-aws-ovn-serial | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | true | /test e2e-aws-ovn-serial |
Full PR test history. Your PR dashboard.
@JoelSpeed For some reason I don't understand, the kube-apiserver is still receiving the cloud-config argument in the failing tests.
Taking the e2e-aws-ovn-upgrade job as an example, the kube-apiserver log shows `FLAG: --cloud-config="/etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf"`, and the kubeapiserver-operator log shows `\"cloud-config\":[\"/etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf\"]`.
I'm not able to find any references to cloud.conf in the KASO code within this PR. The vendored library-go files have removed references to it. I do see some leftover consts, but they aren't referenced anywhere.
Is there somewhere else that I need to update? I think I'm missing something obvious.
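For illustration, here is a minimal sketch of the shape of change being looked for here: dropping cloud-config from the observed apiServerArguments before they are rendered into the static pod. All names and the function signature are illustrative assumptions, not the actual library-go config-observer API this operator vendors.

```go
// Illustrative sketch only, not the operator's actual config-observer code.
// It shows the general shape of pruning the cloud-config entry from the
// apiServerArguments map the operator records in its observed configuration.
package main

import "fmt"

// pruneCloudConfig returns a copy of apiServerArguments without the
// cloud-config flag, so the rendered kube-apiserver no longer receives
// --cloud-config on its command line.
func pruneCloudConfig(apiServerArguments map[string][]string) map[string][]string {
	pruned := make(map[string][]string, len(apiServerArguments))
	for flag, values := range apiServerArguments {
		if flag == "cloud-config" {
			continue // drop the stale argument
		}
		pruned[flag] = values
	}
	return pruned
}

func main() {
	args := map[string][]string{
		"cloud-config":   {"/etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf"},
		"audit-log-path": {"/var/log/kube-apiserver/audit.log"},
	}
	fmt.Println(pruneCloudConfig(args)) // cloud-config is gone
}
```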
@nrb I think this is an upgrade issue rather than an install-time issue; looking at all of the jobs where the cluster is installed fresh, the issue does not present.
However, the operator today appears to be removing the cloud config ConfigMap prior to rolling out updated pods. That is going to cause the existing pods to crash, which in turn will prevent the operator from rolling out the updated version of the pod spec.
We need to make sure the cloud config ConfigMap is not removed until after the new pods have been rolled out (see the sketch below).
I would have expected we had already seen this with the KCMO, but I can't find a reference to how we fixed it 🤔
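To make the ordering constraint concrete, here is a minimal sketch of such a guard. The NodeStatus type and field names are assumptions modeled on the operator's per-node revision tracking, and the helper is hypothetical, not the actual fix:

```go
// Minimal sketch of the ordering guard described above. NodeStatus and its
// fields are assumed names mirroring the operator's per-node revision status;
// this is not the real implementation.
package operator

// NodeStatus records which static-pod revision a control-plane node is running.
type NodeStatus struct {
	NodeName        string
	CurrentRevision int32
	TargetRevision  int32
}

// safeToRemoveCloudConfig reports whether the cloud-config ConfigMap can be
// deleted: every node must already be running a revision that no longer mounts
// the ConfigMap, otherwise deleting it would crash a still-running pod and
// stall the rollout of the new pod spec.
func safeToRemoveCloudConfig(nodes []NodeStatus, firstRevisionWithoutCloudConfig int32) bool {
	for _, n := range nodes {
		if n.CurrentRevision < firstRevisionWithoutCloudConfig {
			return false // an older pod may still mount the ConfigMap
		}
	}
	return true
}
```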
Reproduce:
oc get pv/multizone-pv75sdv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.beta.kubernetes.io/gid: "777"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2024-07-02T05:22:16Z"
finalizers:
- kubernetes.io/pv-protection
generateName: multizone-pv
labels:
topology.kubernetes.io/region: us-central1
topology.kubernetes.io/zone: us-central1-c
name: multizone-pv75sdv
resourceVersion: "59982"
uid: 54b5f58b-0539-4f5f-a33c-e8d741c639ae
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: pvc-qflhf
namespace: e2e-multi-az-8087
resourceVersion: "59980"
uid: e97099f8-2bac-426f-b5f4-b93c18eb91d1
gcePersistentDisk:
fsType: ext3
pdName: e2e-3c2ee91b-1f69-44db-be68-191c13ccedae
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- us-central1-c
- key: topology.kubernetes.io/region
operator: In
values:
- us-central1
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
status:
lastPhaseTransitionTime: "2024-07-02T05:22:17Z"
phase: Bound
After the fix, the PV no longer gets the topology.kubernetes.io labels or the nodeAffinity stanza:
oc get pv/multizone-pv967f9 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.beta.kubernetes.io/gid: "777"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2024-07-02T06:06:24Z"
finalizers:
- kubernetes.io/pv-protection
generateName: multizone-pv
name: multizone-pv967f9
resourceVersion: "37953"
uid: 915485a2-5263-4b3a-80d8-9391cb071616
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: pvc-2g4vh
namespace: e2e-multi-az-6664
resourceVersion: "37951"
uid: ffc646a2-e671-46b5-8e05-192c6cab2ab1
gcePersistentDisk:
fsType: ext3
pdName: e2e-9a53a94a-6a96-4d9c-9a50-89a93f2593c1
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
status:
lastPhaseTransitionTime: "2024-07-02T06:06:24Z"
phase: Bound
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Are we still keeping this flag for when external CCMs are in use?
It isn't required; there's no code in the API server that relies on cloud providers anymore, and there's no need for the API server to be aware of whether we are running an external platform either.
Thanks Joel, I am a little paranoid based on the kube-controller-manager stuff we found recently.
This is effectively a backport of https://github.com/openshift/cluster-kube-apiserver-operator/pull/1693 and #1656, combined.
The SHA used for library-go here is https://github.com/openshift/library-go/commit/b8bcc87e7606bb5c96701791d1b20ce7ec24c83e, to avoid pulling in unrelated library-go changes, in particular the kube 1.30/go 1.22 bumps.