openshift / cluster-kube-apiserver-operator

The kube-apiserver operator installs and maintains the kube-apiserver on a cluster

OCPBUGS-34545: Remove cloud-config and cloud-provider arguments #1696

Open · nrb opened this pull request 6 months ago

nrb commented 6 months ago

This is effectively a backport of https://github.com/openshift/cluster-kube-apiserver-operator/pull/1693 and #1656, combined.

The library-go SHA used here is https://github.com/openshift/library-go/commit/b8bcc87e7606bb5c96701791d1b20ce7ec24c83e, chosen to avoid pulling in unrelated library-go changes, in particular the kube 1.30/go 1.22 bumps.

nrb commented 5 months ago

/test ci/prow/e2e-gcp-operator-encryption-single-node

openshift-ci[bot] commented 5 months ago

@nrb: The specified target(s) for /test were not found. The following commands are available to trigger required jobs:

The following commands are available to trigger optional jobs:

Use /test all to run the following jobs that were automatically triggered:

In response to [this](https://github.com/openshift/cluster-kube-apiserver-operator/pull/1696#issuecomment-2145434822):

> /test ci/prow/e2e-gcp-operator-encryption-single-node

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
nrb commented 5 months ago

/test e2e-aws-operator-disruptive-single-node

nrb commented 5 months ago

/test e2e-gcp-operator-encryption-single-node

openshift-ci[bot] commented 5 months ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nrb

Once this PR has been reviewed and has the lgtm label, please assign vrutkovs for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/openshift/cluster-kube-apiserver-operator/blob/release-4.16/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
JoelSpeed commented 5 months ago

/jira refresh

openshift-ci-robot commented 5 months ago

@JoelSpeed: This pull request references Jira Issue OCPBUGS-34545, which is invalid:

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to [this](https://github.com/openshift/cluster-kube-apiserver-operator/pull/1696#issuecomment-2158522419):

> /jira refresh

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fcluster-kube-apiserver-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.
JoelSpeed commented 5 months ago

/retest-required

nrb commented 5 months ago

/retest

openshift-ci[bot] commented 5 months ago

@nrb: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-gcp-operator-encryption-perf-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-gcp-operator-encryption-perf-single-node |
| ci/prow/e2e-gcp-operator-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-gcp-operator-single-node |
| ci/prow/e2e-aws-operator-disruptive-single-node | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | false | /test e2e-aws-operator-disruptive-single-node |
| ci/prow/e2e-aws-ovn-upgrade | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | true | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-aws-ovn-serial | 6cf03c295c3f7fc2cff7bdc96ba4731c57685419 | link | true | /test e2e-aws-ovn-serial |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository. I understand the commands that are listed [here](https://go.k8s.io/bot-commands).
nrb commented 5 months ago

@JoelSpeed For some reason I don't understand, the kube-apiserver is still receiving the cloud-config argument in the failing tests.

Taking the e2e-aws-ovn-upgrade job as an example, the kube-apiserver log shows `FLAG: --cloud-config="/etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf"`.

The kubeapiserver-operator log shows `\"cloud-config\":[\"/etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf\"]`.

I'm not able to find any references to cloud.conf in the KASO code within this PR. The vendored library-go files have removed references to it. I do see some leftover consts, but they aren't referenced anywhere.

Is there somewhere else that I need to update? I think I'm missing something obvious.
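For context, the removal this PR is after amounts to dropping those keys from the `apiServerArguments` map in the observed config. A minimal, hypothetical Go sketch of that pruning step (not the actual library-go config observer, whose interface and wiring differ) might look like:

```go
package observer

// pruneCloudArgs is an illustrative helper (not part of library-go): it removes
// the cloud-config and cloud-provider arguments from an unstructured observed
// config of the form {"apiServerArguments": {"cloud-config": [...], ...}}.
func pruneCloudArgs(observedConfig map[string]interface{}) map[string]interface{} {
	args, ok := observedConfig["apiServerArguments"].(map[string]interface{})
	if !ok {
		return observedConfig
	}
	delete(args, "cloud-config")
	delete(args, "cloud-provider")
	return observedConfig
}
```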

JoelSpeed commented 5 months ago

@nrb I think this is an upgrade issue rather than an install-time issue; looking at the jobs where the cluster is installed fresh, the issue does not present.

However, the operator today appears to remove the cloud config configmap before rolling out the updated pods. That causes the existing pods to crash, which in turn prevents the operator from rolling out the updated version of the pod spec.

We need to make sure the cloud config configmap is not removed until after the new pods have been rolled out.

I would have expected we already hit this with the KCMO, but I can't find a reference to how we fixed it 🤔
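To illustrate the ordering Joel describes, the configmap deletion would need to wait until every node is running a revision that no longer mounts it. A rough, hypothetical sketch of that gate (simplified types; the real operator tracks this through its static pod revision status) could be:

```go
package rollout

// NodeRevision is a simplified stand-in for the operator's per-node static pod
// status, which records the revision each node is currently running.
type NodeRevision struct {
	Node            string
	CurrentRevision int32
}

// safeToRemoveCloudConfig reports whether the cloud-config configmap can be
// deleted: only once every node has rolled out a revision whose pod spec no
// longer mounts it (firstRevisionWithoutCloudConfig or later).
func safeToRemoveCloudConfig(nodes []NodeRevision, firstRevisionWithoutCloudConfig int32) bool {
	for _, n := range nodes {
		if n.CurrentRevision < firstRevisionWithoutCloudConfig {
			// An older revision still mounts the configmap; deleting it now
			// would crash that pod and stall the rollout.
			return false
		}
	}
	return true
}
```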

chao007 commented 5 months ago

Reproduce: `oc get pv/multizone-pv75sdv -o yaml`

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "777"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2024-07-02T05:22:16Z"
  finalizers:
  - kubernetes.io/pv-protection
  generateName: multizone-pv
  labels:
    topology.kubernetes.io/region: us-central1
    topology.kubernetes.io/zone: us-central1-c
  name: multizone-pv75sdv
  resourceVersion: "59982"
  uid: 54b5f58b-0539-4f5f-a33c-e8d741c639ae
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-qflhf
    namespace: e2e-multi-az-8087
    resourceVersion: "59980"
    uid: e97099f8-2bac-426f-b5f4-b93c18eb91d1
  gcePersistentDisk:
    fsType: ext3
    pdName: e2e-3c2ee91b-1f69-44db-be68-191c13ccedae
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-central1-c
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - us-central1
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2024-07-02T05:22:17Z"
  phase: Bound

After the fix: `oc get pv/multizone-pv967f9 -o yaml`

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "777"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2024-07-02T06:06:24Z"
  finalizers:
  - kubernetes.io/pv-protection
  generateName: multizone-pv
  name: multizone-pv967f9
  resourceVersion: "37953"
  uid: 915485a2-5263-4b3a-80d8-9391cb071616
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-2g4vh
    namespace: e2e-multi-az-6664
    resourceVersion: "37951"
    uid: ffc646a2-e671-46b5-8e05-192c6cab2ab1
  gcePersistentDisk:
    fsType: ext3
    pdName: e2e-9a53a94a-6a96-4d9c-9a50-89a93f2593c1
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2024-07-02T06:06:24Z"
  phase: Bound
openshift-bot commented 2 months ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

nrb commented 1 month ago

/remove-lifecycle stale

JoelSpeed commented 1 month ago

> are we still keeping this flag for when external ccms are in use?

It isn't required: there's no code in the API server that relies on cloud providers anymore, and there's no need for the API server to be aware of whether we are running an external platform either.

elmiko commented 1 month ago

thanks Joel, i am a little paranoid based on kube-controller-manager stuff we found recently.