ca-scribner closed this issue 2 years ago.
This sort of situation is handled by tools like kapp, but the scope of kapp is different from what lightkube is trying to do, so I'm not sure it maps well here.
Just tested removing `rules: []` (`rules` is optional anyway) from `aggregate-clusterrole`, and it works fine. This makes sense: if you try to send an empty array, k8s assumes you want to change the content of this attribute, and the apply fails. Closing, as it seems there is a workaround.
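For reference, a minimal sketch of what the working manifest looks like once `rules` is dropped (the role name and selector here just mirror the example further down in this thread):

```yaml
# Sketch of the workaround: omit `rules` entirely and let the
# clusterrole-aggregation-controller own that field.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-clusterrole
aggregationRule:
  clusterRoleSelectors:
  - matchExpressions:
    - {key: test.com/aggregate-to-view, operator: Exists}
# no `rules` key at all
```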
Sorry, I thought I had responded to this.
Removing `rules` does avoid the issue, but it feels like an awkward workaround. `rules` is optional, but not prohibited, so a lot of valid YAML manifests using aggregated roles exist with this attribute defined. If lightkube does not handle this implicitly, applying those manifests becomes unstable (for example, `.apply()` might work at first on a clean cluster, but reconciling later and invoking a second `.apply()` would raise the conflict).

This also feels like a departure from the expected behaviour of other similar tools. For example, kubectl handles the same case fine:
cat << EOF > aggregator_role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregator-clusterrole
  labels:
aggregationRule:
  clusterRoleSelectors:
  - matchExpressions:
    - {key: test.com/aggregate-to-view, operator: Exists}
rules: []
EOF
# First time works
kubectl apply -f aggregator_role.yaml
# Second time also works
kubectl apply -f aggregator_role.yaml
lightkube uses server-side apply, and you can verify yourself that the behaviour of lightkube is in line with kubectl when server-side apply is used:
$ kubectl apply -f cr1.yaml --server-side
clusterrole.rbac.authorization.k8s.io/aggregated-clusterrole serverside-applied
clusterrole.rbac.authorization.k8s.io/aggregate-clusterrole serverside-applied
$ kubectl apply -f cr1.yaml --server-side
clusterrole.rbac.authorization.k8s.io/aggregated-clusterrole serverside-applied
error: Apply failed with 1 conflict: conflict with "clusterrole-aggregation-controller": .rules
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
See also the error text "If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers.".
To me this is the intended behaviour. I'm not sure why client-side apply behaves differently; in any case, I suggest you open an issue in the kubernetes repository, and if they fix server-side apply, lightkube will automatically take advantage of that.
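For completeness, the first option in that conflict message (taking ownership of the field) corresponds to kubectl's `--force-conflicts` flag; in lightkube that should be the `force` argument of `apply`. A rough sketch, assuming the manifest file and field manager name used here:

```python
# Rough sketch: explicitly take ownership of conflicting fields, the lightkube
# counterpart of `kubectl apply --server-side --force-conflicts`.
# The field manager name and the manifest path are illustrative.
from lightkube import Client, codecs

client = Client(field_manager="my-operator")
with open("aggregator_role.yaml") as f:
    for obj in codecs.load_all_yaml(f):
        client.apply(obj, force=True)  # overrides the conflict on .rules
```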
Is there a way in lightkube to apply an aggregate ClusterRole that includes an empty `rules: []` without `force`-ing it? Applying to an aggregate ClusterRole with anything in `rules` results in a 409 conflict because the control plane maintains the rules list. I'd rather avoid using `force` so I don't suppress other errors, but I can't think of anything else apart from adding some custom logic before calling `.apply()` to remove `rules` entirely.

This Python snippet demonstrates the issue, generating a 409 conflict on `rules` when a change is applied without `force=True`:
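A rough sketch of such a reproduction, assuming lightkube's generated model classes and a default client configuration (the exact original snippet may differ):

```python
# Rough sketch of a reproduction: applying the same aggregate ClusterRole
# twice. The second apply fails with a 409 conflict on .rules because the
# clusterrole-aggregation-controller has taken ownership of that field.
from lightkube import Client
from lightkube.models.meta_v1 import (
    LabelSelector,
    LabelSelectorRequirement,
    ObjectMeta,
)
from lightkube.models.rbac_v1 import AggregationRule
from lightkube.resources.rbac_authorization_v1 import ClusterRole

cr = ClusterRole(
    metadata=ObjectMeta(name="aggregator-clusterrole"),
    aggregationRule=AggregationRule(
        clusterRoleSelectors=[
            LabelSelector(
                matchExpressions=[
                    LabelSelectorRequirement(
                        key="test.com/aggregate-to-view", operator="Exists"
                    )
                ]
            )
        ]
    ),
    rules=[],
)

client = Client(field_manager="example")  # field manager name is illustrative
client.apply(cr)  # first apply: succeeds on a clean cluster
client.apply(cr)  # second apply: raises an ApiError with status 409 (conflict on .rules)
```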