Closed erikgb closed 1 month ago
The DCO is fixed, but nothing seems to trigger a re-run.....
@erikgb what OpenShift environment/version are you testing against?
@sabre1041 The SCC CRD is downloaded from the API server in an OCP 4.15.11 cluster (our target environment). For feature branch pipelines in our provisioning projects we use a k3s cluster with selected OpenShift CRDs added. We haven't found any usable ephemeral OpenShift "brand" clusters to use.
We prefer to avoid configuring our software for OpenShift (more standard Kubernetes resources), but with regard to SCCs we have no choice.
I know that OpenShift is quite lenient when it comes to validating CRs against CRD schemas, so it's not entirely surprising that you are running into these challenges when operating in a standard Kubernetes environment. Have you tried to install the chart into a fully supported OpenShift environment, and does it work as expected?
Yes, as stated in the PR description. But then I am unable to verify changes on feature branches. The CRDs published by OpenShift are obviously wrong - a bug - but I have no influence over what Red Hat does. This PR should be considered a (harmless) workaround.
The other route is to fix the CRDs so they are more representative of what OpenShift is actually going to do. You could just strip the faulty required section out of the CRD before loading it, since it seems like OpenShift isn't honoring it? The cluster should then behave more like OpenShift and work with charts other than just this one.
A quick fix may be:
curl https://github.com/openshift/api/raw/2c10e58877296b062ee6fc63e7fda1eafe7d1bdc/payload-manifests/crds/0000_03_config-operator_01_securitycontextconstraints.crd.yaml -L \
| yq e '.spec.versions[0].schema.openAPIV3Schema.required = []' - \
| kubectl apply -f -
Yes, that's an option, but it adds even more complexity. IMO being explicit in the SCCs is better - and it's also aligned with the faulty CRDs published by OpenShift. The problem with this chart specifically is that it's built for OpenShift (global.openshift: true). We didn't use this value previously and handled our OpenShift adaptations with patches/additions. But in the latest release, a problematic initContainer was added behind a conditional on this property....
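For context, the chart flag under discussion is just a values toggle. A minimal values override enabling the OpenShift-specific rendering might look like this (a sketch based on the global.openshift value named above; the rest of the chart's values layout is not shown):

```yaml
# values.yaml override (sketch) - turns on the chart's OpenShift-specific
# templates, which include the SCC resources and the conditional
# initContainer mentioned above.
global:
  openshift: true
```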
Ok. Once the DCO is fixed, then we can merge.
Thanks @kfox1111! DCO should be OK now.
There seems to be some flakiness in CI. I saw the same job fail on the first run, and it appears to be completely unrelated.
We add some OpenShift proprietary resource CRDs to more standard Kubernetes clusters like k3s/kind for feature branch testing. Our target clusters are OpenShift, so we want to enable global.openshift = true in the Helm chart. This currently fails because some of the SCCs in the chart do not set properties required by the SCC CRD: https://github.com/openshift/api/blob/2c10e58877296b062ee6fc63e7fda1eafe7d1bdc/payload-manifests/crds/0000_03_config-operator_01_securitycontextconstraints.crd.yaml#L351
There is something funky here, since the SCCs templated by the chart ARE accepted by OpenShift - even though they are missing these mandatory fields. But this appears to be an OpenShift issue, and not something I will try to fix....
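To illustrate what "being explicit" means here: the required list in the linked CRD revision appears to cover the SCC's top-level boolean knobs. A minimal SCC that sets them all explicitly - and so validates even against the strict published schema - might look like the sketch below (the name and the non-boolean strategy fields are hypothetical, not taken from the chart):

```yaml
# Sketch of a minimal SCC that explicitly sets the boolean fields the
# published CRD schema marks as required, so strict validation passes.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-restricted   # hypothetical name
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
readOnlyRootFilesystem: false
# Strategy fields shown for completeness; adjust to your policy.
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
```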