openshift / machine-config-operator

Apache License 2.0

MCO-1202: MCO-1203: MCO-1204: MCO-1205: MCO-1213: Implementing tlsSecurityProfile for MCO #4435

Open djoshy opened 3 weeks ago

djoshy commented 3 weeks ago

- What I did

In regular cluster operation:

- The operator and controller listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
- The operator now renders TLSMinVersion and TLSCiphers into the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
- The template controller now renders TLSMinVersion and TLSCiphers into the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs; as a result, changing the TLS settings causes a MachineConfig rollout.
- The MCS container has two new TLS arguments. The operator renders TLSMinVersion and TLSCiphers via these arguments in the MCS daemonset manifest, and the MCS attempts to create an HTTP server with these TLS settings at startup.

During cluster bootstrap:

- The bootstrap MCC defaults to the intermediate security profile unless an install-time APIServer manifest is provided (see the example manifest below). This is necessary because the kube-rbac-proxy-crio pod manifests appear in rendered MachineConfigs, and the bootstrap MachineConfigs must match the in-cluster MachineConfigs post-install.
- The bootstrap MCC has access to all install-time manifests provided to the installer, but the bootstrap MCS does not. When the bootstrap MCC determines that an APIServer manifest has been provided, it writes the manifest into the bootstrap MCS's directory.
- The bootstrap MCS reads in the APIServer manifest and launches its HTTP server with the TLS settings defined there. If no manifest is provided, it defaults to the intermediate security profile.

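For reference, a minimal sketch of an install-time APIServer manifest selecting the Old profile is shown below. The `tlsSecurityProfile` fields follow the documented `config.openshift.io/v1` API; the file name and install directory are arbitrary placeholders, and if your install flow already produces an APIServer manifest you would edit that file instead.

```sh
# Sketch: supply an APIServer manifest at install time (file name is a placeholder).
openshift-install create manifests --dir=./install-dir

cat > ./install-dir/manifests/cluster-apiserver-tls.yaml <<'EOF'
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  tlsSecurityProfile:
    type: Old
    old: {}
EOF

# Keep the bootstrap node around so its MCS logs can be inspected later (step 4).
export OPENSHIFT_INSTALL_PRESERVE_BOOTSTRAP=true
openshift-install create cluster --dir=./install-dir
```
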
- How to verify it

  1. Bring up a cluster with an install-time APIServer manifest named "cluster" (see the example manifest above). To test bootstrap behavior, ensure the environment variable `OPENSHIFT_INSTALL_PRESERVE_BOOTSTRAP` is set to true.
  2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration. These can be seen as arguments for the kube-rbac-proxy sidecars on the MCO deployments, as well as encapsulated in MachineConfigs for the kube-rbac-proxy-crio pods (see the command sketch after this list).
  3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
  4. You can also log in to the bootstrap node and verify from the bootstrap MCS logs that the server started with the correct TLS settings.
  5. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
  6. This will cause a new MachineConfig rollout and also cause the MCC, MCD and MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
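
A rough sketch of step 2 with `oc`, assuming the usual MCO resource names (the `openshift-machine-config-operator` namespace and the `machine-config-controller`/`machine-config-daemon`/`machine-config-server` workloads) and the standard kube-rbac-proxy flag names; adjust the greps if the argument names differ in your build:

```sh
# kube-rbac-proxy (and MCS) TLS arguments on the MCO workloads.
for obj in deployment/machine-config-controller daemonset/machine-config-daemon daemonset/machine-config-server; do
  oc -n openshift-machine-config-operator get "$obj" -o yaml | grep -E 'tls-(min-version|cipher-suites)'
done

# The kube-rbac-proxy-crio settings are carried in the rendered MachineConfigs.
# Note: file contents inside MachineConfigs are encoded (data URLs), so a plain
# grep only confirms the file entry is present; decode it to inspect the arguments.
RENDERED=$(oc get machineconfigpool master -o jsonpath='{.spec.configuration.name}')
oc get machineconfig "$RENDERED" -o yaml | grep -i 'kube-rbac-proxy'
```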

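For steps 3 through 6, the sketch below assumes the MCS is reachable through the internal API endpoint on its usual Ignition-serving port (22623) and uses a merge patch as a stand-in for the documented `oc edit` procedure; `<cluster-domain>` and `<bootstrap-ip>` are placeholders:

```sh
# Step 3: probe the MCS endpoint's accepted TLS versions (Old permits TLS 1.1,
# Intermediate requires TLS 1.2 or newer).
openssl s_client -connect api-int.<cluster-domain>:22623 -tls1_1 < /dev/null
openssl s_client -connect api-int.<cluster-domain>:22623 -tls1_2 < /dev/null

# Step 4: on the preserved bootstrap node, grep the journal for the bootstrap
# MCS TLS log line (exact service/container names may vary).
ssh core@<bootstrap-ip> 'sudo journalctl --no-pager | grep -i machine-config-server | grep -i tls'

# Step 5: switch the cluster-wide profile.
oc patch apiserver.config.openshift.io cluster --type=merge \
  -p '{"spec":{"tlsSecurityProfile":{"type":"Old","old":{}}}}'

# Step 6: watch the MachineConfig rollout and the MCO pod restarts.
oc get machineconfigpools -w
oc -n openshift-machine-config-operator get pods -w
```
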
Things to note:

- It may take a few moments before the operator rolls out the new manifests. If the change is made while the MCO is mid-way through a MachineConfig update to the master pool, the manifests won't be updated until the master pool finishes updating. This is because of the way the operator's sync loops are structured.
- The APIServer rejects the Modern profile at the moment, according to the docs.
- If no tlsSecurityProfile is provided, the MCO will default to Intermediate.
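
To confirm the last point on a running cluster, checking for an unset profile is straightforward (an empty result means the MCO falls back to Intermediate):

```sh
# Empty output = no tlsSecurityProfile set on the cluster APIServer object.
oc get apiserver.config.openshift.io cluster -o jsonpath='{.spec.tlsSecurityProfile}'
```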

openshift-ci[bot] commented 3 weeks ago

Skipping CI for Draft Pull Request. If you want CI signal for your change, please convert it to an actual PR. You can still manually trigger a test run with `/test all`.

openshift-ci-robot commented 3 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> **- How to verify it**
>
> **- Description for the changelog**

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci-robot commented 3 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> - Fetch the global tlsSecurityProfile from the APIServer object.
> - Render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via updated templates.
> - Render TLSMinVersion and TLSCiphers to the MCS daemonset manifest via updated templates.
> - The MCS will attempt to create an HTTP server with these new TLS settings while starting up. During bootstrap, the MCS will default to the intermediate security profile.
>
> **- How to verify it**
>
> 1. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 2. Test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 3. The endpoint should switch between the ciphers and TLSMinVersion as you switch between Intermediate and Old. **Note**: The APIServer rejects the Modern profile at the moment. It may also take a few moments before the operator rolls out the new manifests; if the change is made while the MCO is mid-upgrade, the manifests won't be updated until the master pool is finished updating.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

djoshy commented 3 weeks ago

/test e2e-hypershift
/test e2e-gcp-op-techpreview

openshift-ci-robot commented 3 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - Added new TLS arguments for the MCS container. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The MCS will default to the intermediate security profile. I plan to make this capable of reading the install-time APIServer manifest too, but it does not cause a MachineConfig update, so it does not need to be solved within this PR.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster".
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 5. This will cause a new MachineConfig rollout and also cause the MCD/MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci-robot commented 3 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - Added new TLS arguments for the MCS container. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary during bootstrap, as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The MCS will default to the intermediate security profile. I plan to make this capable of reading the install-time APIServer manifest too, but it does not cause a MachineConfig update, so it does not need to be solved within this PR.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster".
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 5. This will cause a new MachineConfig rollout and also cause the MCD/MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci-robot commented 3 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will now render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will now render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - The MCS container now has two new TLS arguments. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary during bootstrap, as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The MCS will default to the intermediate security profile. I plan to make this capable of reading the install-time APIServer manifest too, but it does not cause a MachineConfig update, so it does not need to be solved within this PR.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster".
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 5. This will cause a new MachineConfig rollout and also cause the MCC, MCD and MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.
> - If no tlsSecurityProfile is provided, the MCO will default to Intermediate.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci-robot commented 2 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will now render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will now render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - The MCS container now has two new TLS arguments. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The bootstrap MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary during bootstrap, as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The bootstrap MCC has access to all install-time manifests provided to the installer, but the bootstrap MCS does not. When the bootstrap MCC determines that an APIServer manifest has been provided, it will write it to the bootstrap MCS's directory.
> - The bootstrap MCS will now read in the APIServer manifest and launch its HTTP server with the TLS settings defined in the manifest. If no manifest is provided, it will default to the intermediate security profile.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster". To test bootstrap behavior, ensure the environment variable `OPENSHIFT_INSTALL_PRESERVE_BOOTSTRAP` is set to true.
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration. These can be seen as arguments for the kube-rbac-proxy sidecars on the MCO deployments, as well as encapsulated in MachineConfigs for the kube-rbac-proxy-crio pods.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. You can also log in to the bootstrap node and verify from the bootstrap MCS logs that the server started with the correct TLS settings.
> 5. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 6. This will cause a new MachineConfig rollout and also cause the MCC, MCD and MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.
> - If no tlsSecurityProfile is provided, the MCO will default to Intermediate.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci-robot commented 2 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will now render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will now render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - The MCS container now has two new TLS arguments. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The bootstrap MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary during bootstrap, as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The bootstrap MCC has access to all install-time manifests provided to the installer, but the bootstrap MCS does not. When the bootstrap MCC determines that an APIServer manifest has been provided, it will write it to the bootstrap MCS's directory.
> - The bootstrap MCS will now read in the APIServer manifest and launch its HTTP server with the TLS settings defined in the manifest. If no manifest is provided, it will default to the intermediate security profile.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster". To test bootstrap behavior, ensure the environment variable `OPENSHIFT_INSTALL_PRESERVE_BOOTSTRAP` is set to true.
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration. These can be seen as arguments for the kube-rbac-proxy sidecars on the MCO deployments, as well as encapsulated in MachineConfigs for the kube-rbac-proxy-crio pods.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. You can also log in to the bootstrap node and verify from the bootstrap MCS logs that the server started with the correct TLS settings.
> 5. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 6. This will cause a new MachineConfig rollout and also cause the MCC, MCD and MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.
> - If no tlsSecurityProfile is provided, the MCO will default to Intermediate.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

djoshy commented 2 weeks ago

/retest

openshift-ci-robot commented 2 weeks ago

@djoshy: This pull request references MCO-1202 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.

In response to [this](https://github.com/openshift/machine-config-operator/pull/4435):

> **- What I did**
>
> In regular cluster operation:
>
> - The operator and controller will listen on an APIServer object, which contains the [global cluster TLS settings](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> - The operator will now render TLSMinVersion and TLSCiphers to the MCC and MCD kube-rbac-proxy sidecar manifests via templated arguments.
> - The template controller will now render TLSMinVersion and TLSCiphers to the kube-rbac-proxy-crio metrics pods via templated arguments. These are per-node pods in the MCO namespace, deployed via MachineConfigs. As a result, changing the TLS settings will cause a MachineConfig rollout.
> - The MCS container now has two new TLS arguments. The operator will render TLSMinVersion and TLSCiphers via these new arguments in the MCS daemonset manifest. The MCS will attempt to create an HTTP server with these new TLS settings while starting up.
>
> During cluster bootstrap:
>
> - The bootstrap MCC will default to the intermediate security profile, unless an install-time APIServer manifest is provided. This is necessary during bootstrap, as the kube-rbac-proxy-crio pod manifests show up in rendered MachineConfigs and we need the bootstrap MachineConfigs to match the in-cluster MachineConfigs post-install.
> - The bootstrap MCC has access to all install-time manifests provided to the installer, but the bootstrap MCS does not. When the bootstrap MCC determines that an APIServer manifest has been provided, it will write it to the bootstrap MCS's directory.
> - The bootstrap MCS will now read in the APIServer manifest and launch its HTTP server with the TLS settings defined in the manifest. If no manifest is provided, it will default to the intermediate security profile.
>
> **- How to verify it**
>
> 1. Bring up a cluster with an install-time APIServer manifest named "cluster". To test bootstrap behavior, ensure the environment variable `OPENSHIFT_INSTALL_PRESERVE_BOOTSTRAP` is set to true.
> 2. Once the install is complete, observe the in-cluster kube-rbac-proxy manifests to ensure they have the correct TLS configuration. These can be seen as arguments for the kube-rbac-proxy sidecars on the MCO deployments, as well as encapsulated in MachineConfigs for the kube-rbac-proxy-crio pods.
> 3. The MCS daemonset will log the current TLS settings at startup. You can also test the MCS endpoint using the method described in this [comment](https://issues.redhat.com/browse/MCO-1204?focusedId=25007192&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-25007192).
> 4. You can also log in to the bootstrap node and verify from the bootstrap MCS logs that the server started with the correct TLS settings.
> 5. Switch between TLS profiles by [following the documentation](https://docs.openshift.com/container-platform/4.15/security/tls-security-profiles.html#tls-profiles-kubernetes-configuring_tls-security-profiles).
> 6. This will cause a new MachineConfig rollout and also cause the MCC, MCD and MCS pods to restart. You can verify that (2) and (3) are as expected as the update rolls through the cluster.
>
> **Things to note**:
>
> - It may take a few moments before the operator rolls out the new manifests. If it is done while the MCO is doing a MachineConfig update to the master pool, the manifests won't be updated until the master pool is finished updating. This is because of the way the operator's sync loops are structured.
> - The APIServer rejects the Modern profile at the moment, according to the docs.
> - If no tlsSecurityProfile is provided, the MCO will default to Intermediate.

Instructions for interacting with me using PR comments are available [here](https://prow.ci.openshift.org/command-help?repo=openshift%2Fmachine-config-operator). If you have questions or suggestions related to my behavior, please file an issue against the [openshift-eng/jira-lifecycle-plugin](https://github.com/openshift-eng/jira-lifecycle-plugin/issues/new) repository.

openshift-ci[bot] commented 2 weeks ago

@djoshy: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository. I understand the commands that are listed [here](https://go.k8s.io/bot-commands).
openshift-ci[bot] commented 1 week ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/openshift/machine-config-operator/blob/master/OWNERS)~~ [djoshy,yuqi-zhang]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.