openshift / compliance-operator

Operator providing OpenShift cluster compliance checks
Apache License 2.0

Proposal for Kubelet Config Remediation #714

Closed. Vincent056 closed this 3 years ago.

Vincent056 commented 3 years ago

Created a proposal for KubeletConfig Remediation

openshift-ci[bot] commented 3 years ago

Hi @Vincent056. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
Vincent056 commented 3 years ago

> This looks good! Thanks for the proposal.
>
> One thing I'd like to see also is... what are the requirements for CO to be able to remediate a KC? Can it just remediate a KC in every instance? Or do we have some limitations?
>
> For instance, for MachineConfigs, we have the limitation that you need to use a nodeSelector, and the nodeSelector needs to match a specific MachineConfigPool, else we skip creating the remediation as it won't be applicable. Should we have something similar for KCs?

I think we are going to use similar logic here, but there is one limitation: a KubeletConfig needs a machineConfigPoolSelector instead of a nodeSelector. We can continue to use a nodeSelector, but there can be situations where a machineConfigPoolSelector is not equivalent to a nodeSelector, e.g. a machineConfigPoolSelector that matches pools covering both worker and master nodes, while the nodeSelector will only select one of them.
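For illustration, here is a minimal sketch of the mismatch described above, assuming the standard OpenShift `KubeletConfig` and `MachineConfigPool` APIs; the resource names and the `custom-kubelet: example` label are hypothetical:

```yaml
# Hypothetical KubeletConfig: it targets MachineConfigPools via
# machineConfigPoolSelector rather than node labels directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: example-kubelet-remediation
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: example   # assumed label carried by one or more pools
  kubeletConfig:
    eventRecordQPS: 10
---
# The matched pool selects nodes through its own nodeSelector. If the
# custom-kubelet label were carried by both the worker and master pools,
# a scan nodeSelector targeting only workers would not line up with it.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
  labels:
    custom-kubelet: example
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
```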

JAORMX commented 3 years ago

> This looks good! Thanks for the proposal. One thing I'd like to see also is... what are the requirements for CO to be able to remediate a KC? Can it just remediate a KC in every instance? Or do we have some limitations? For instance, for MachineConfigs, we have the limitation that you need to use a nodeSelector, and the nodeSelector needs to match a specific MachineConfigPool, else we skip creating the remediation as it won't be applicable. Should we have something similar for KCs?

> I think we are going to use similar logic here, but there is one limitation: a KubeletConfig needs a machineConfigPoolSelector instead of a nodeSelector. We can continue to use a nodeSelector, but there can be situations where a machineConfigPoolSelector is not equivalent to a nodeSelector, e.g. a machineConfigPoolSelector that matches pools covering both worker and master nodes, while the nodeSelector will only select one of them.

That is fine; if we cannot remediate, then we raise a validation error on the ComplianceRemediation object. But we should not introduce machineConfigPoolSelectors into scans or anything of that sort. We should stick to stuff that would work in any kube distro. So, let's stick to nodeSelectors.
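As a rough sketch of that approach, assuming the `spec.current.object` wrapping the operator already uses for MachineConfig remediations (the remediation name, pool label, and kubelet setting below are illustrative assumptions, not part of the proposal):

```yaml
# Hypothetical KubeletConfig remediation generated from a scan that used a
# plain nodeSelector. The operator would resolve that nodeSelector to the
# single matching MachineConfigPool and fill in machineConfigPoolSelector,
# or mark the ComplianceRemediation with a validation error if it cannot.
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  name: example-scan-kubelet-eventrecordqps
spec:
  apply: false
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      spec:
        machineConfigPoolSelector:
          matchLabels:
            pools.operator.machineconfiguration.openshift.io/worker: ""  # assumed pool label
        kubeletConfig:
          eventRecordQPS: 10
```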

Vincent056 commented 3 years ago

> This looks good! Thanks for the proposal. One thing I'd like to see also is... what are the requirements for CO to be able to remediate a KC? Can it just remediate a KC in every instance? Or do we have some limitations? For instance, for MachineConfigs, we have the limitation that you need to use a nodeSelector, and the nodeSelector needs to match a specific MachineConfigPool, else we skip creating the remediation as it won't be applicable. Should we have something similar for KCs?

> I think we are going to use similar logic here, but there is one limitation: a KubeletConfig needs a machineConfigPoolSelector instead of a nodeSelector. We can continue to use a nodeSelector, but there can be situations where a machineConfigPoolSelector is not equivalent to a nodeSelector, e.g. a machineConfigPoolSelector that matches pools covering both worker and master nodes, while the nodeSelector will only select one of them.

> That is fine; if we cannot remediate, then we raise a validation error on the ComplianceRemediation object. But we should not introduce machineConfigPoolSelectors into scans or anything of that sort. We should stick to stuff that would work in any kube distro. So, let's stick to nodeSelectors.

Sounds good! https://github.com/openshift/compliance-operator/pull/715/files#diff-37d868832c42510177fe527ffee41dc25b205168859c7a1467e2b9f592fb4ed3L306 Now I see this one, so under the ideal situation

JAORMX commented 3 years ago

/ok-to-test

Vincent056 commented 3 years ago

/retest

mrogers950 commented 3 years ago

/lgtm
/approve

openshift-ci[bot] commented 3 years ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mrogers950, Vincent056

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/openshift/compliance-operator/blob/master/OWNERS)~~ [mrogers950]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.