Closed fgunited closed 1 year ago
When a constraint's enforcementAction is set to warn, it does not block creation, but it should return a warning if there is a violation. Audit will flag a resource in the cluster if there is a violation. You do not see any warnings when creating the Ingress resource? See more details here: https://open-policy-agent.github.io/gatekeeper/website/docs/violations#warn-enforcement-action
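For context, a minimal constraint using warn might look like the following (a hedged sketch; the template kind K8sRequiredIngressFields and the constraint name are hypothetical, not taken from this issue):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredIngressFields   # hypothetical template kind, for illustration only
metadata:
  name: ingress-must-set-port
spec:
  enforcementAction: warn   # admit the resource, but surface a warning on violation
  match:
    kinds:
      - apiGroups: ["networking.k8s.io"]
        kinds: ["Ingress"]
```

With warn, the admission webhook admits the resource and attaches a warning to the response instead of denying it.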
In the above example it is correct that I do not get any warning; nevertheless, the audit service sees it differently. For example, I have not set "servicePort" but "service.port", yet the audit service still complains about "servicePort".
If I add real violations, it works as designed: I get the warning on Ingress creation.
What is the whole output of kubectl version? Kubernetes support for warn is relatively recent, so that could be the issue.
That would be interesting:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.5-eks-bc4871b", GitCommit:"5236faf39f1b7a7dabea8df12726f25608131aa9", GitTreeState:"clean", BuildDate:"2021-10-29T23:32:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
It looks like "warn" was introduced in k8s 1.19, so it should be sending a warning: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#response
@ritazh Do you know how the warning is supposed to appear in kubectl?
@fgunited
Are you saying you only see the warning text when there are other violations, such that the request is rejected? I wonder if that is a kubectl verbosity setting?
Audit should detect all violations regardless of enforcement action, though it should list the enforcementAction of the violation as "warn".
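For reference, an illustrative transcript (hedged; not captured from this issue, and the constraint name in brackets is made up): since Kubernetes 1.19, kubectl surfaces admission warnings as Warning: lines on stderr before the normal command output:

```console
$ kubectl apply -f ingress.yaml
Warning: [ingress-must-set-port] Ingress backend must specify servicePort
ingress.networking.k8s.io/example created
```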
@maxsmythe In my example above, the Rego rule (which checks whether servicePort is set) should not match, and so should not trigger any warning:
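(The issue's original Rego isn't reproduced here; as a rough sketch, with all names hypothetical, a check of that kind might look like this. Note that the servicePort path only exists in the v1beta1 Ingress schema; the v1 schema nests the port under service.port instead, which is the crux of this issue.)

```rego
package k8srequiredingressfields

# Hypothetical sketch: flags v1beta1-style Ingress backends missing servicePort.
violation[{"msg": msg}] {
  backend := input.review.object.spec.rules[_].http.paths[_].backend
  not backend.servicePort
  msg := "Ingress backend must specify servicePort"
}
```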
@fgunited Can you get the entire output of the running "example" Ingress from the cluster?
@ritazh Sure:
$ kubectl get ingress example -n test -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"example","namespace":"test"},"spec":{"ingressClassName":"nginx-example","rules":[{"host":"example.com","http":{"paths":[{"backend":{"service":{"name":"example","port":{"number":8080}}},"path":"/","pathType":"ImplementationSpecific"}]}}],"tls":[{"hosts":["example.com"]}]}}
  creationTimestamp: "2022-02-23T15:54:36Z"
  generation: 1
  name: example
  namespace: test
  resourceVersion: "286062498"
  uid: 5aaefd1c-cb68-4f40-9f8a-3743eb167be5
spec:
  ingressClassName: nginx-example
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: example
            port:
              number: 8080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - example.com
status:
  loadBalancer: {}
Thanks!
The audit violation is being thrown against the v1beta1 version of Ingress.
The v1 version of Ingress (which is what you're kubectl-applying) doesn't have a field called servicePort (aside: it looks like this constraint template is written against the v1beta1 version of Ingress), so the constraint template doesn't match, and no warning is raised.
During audit, we audit all possible versions of an object, so we do receive the v1beta1 representation, and a violation is raised.
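The effect described above can be sketched in a few lines (a hypothetical illustration, not Gatekeeper source): the API server converts between served versions of an object, so the v1 backend the user applied acquires a servicePort field in its v1beta1 representation, which is what the audit pod then evaluates.

```python
def v1_to_v1beta1_backend(v1_backend: dict) -> dict:
    """Sketch of the networking.k8s.io/v1 -> v1beta1 Ingress backend
    conversion: v1's nested service.port.number (or .name) becomes
    v1beta1's flat servicePort field."""
    service = v1_backend["service"]
    port = service["port"]
    return {
        "serviceName": service["name"],
        "servicePort": port.get("number", port.get("name")),
    }

# The backend the user applied (v1 shape; no "servicePort" key anywhere):
v1_backend = {"service": {"name": "example", "port": {"number": 8080}}}

# The representation audit receives when it requests v1beta1:
v1beta1_backend = v1_to_v1beta1_backend(v1_backend)
print(v1beta1_backend)  # a template matching on servicePort now sees it
```

This is why the constraint status can report a field the user never wrote: the violation is raised against the converted representation, not the applied one.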
That's interesting. This means I have no way to tell, from a describe of a constraint, whether an object was originally created with a new or a deprecated API version, since all API version representations are checked.
Correct?
Correct.
Kubernetes generally doesn't care which representation version was used to create an object, beyond the fact that different versions may have different defaults for certain fields.
Here is the K8s documentation on backwards compatibility between versions:
And the CRD documentation has some info about how the API server handles versions:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
What steps did you take and what happened:
The audit service gives me a status in the constraint that does not match the response originally returned by the Kubernetes admission webhook when creating a resource.
I created a ConstraintTemplate and a Constraint:
I then successfully created the following Ingress without any warning:
But after a while, the audit service produced the following status in the constraint object:
What did you expect to happen:
I expected that no violation would be shown afterwards in the constraint status, matching the behavior when the resource was created.
Anything else you would like to add:
It seems that the audit gets different data for constraint checking compared to the webhook.
Environment:
Gatekeeper version: 3.7.1
Kubernetes version: (use kubectl version): 1.21.5