raesene opened this issue 2 years ago (status: Open)
it's possible to create a netpol object in the cluster that looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lorem
  namespace: default
spec:
  name: lorem
to be clear, what is actually being created is the following (which is accepted by the built-in network policy API):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lorem
  namespace: default

this is not persisted:

spec:
  name: lorem
kubectl get -o yaml ...
confirms that the built-in netpol API is still the one serving endpoints (the spec is the default spec applied by the server when an empty network policy is created):
spec:
  podSelector: {}
  policyTypes:
  - Ingress
/sig api-machinery
Ideally, when aggregating schemas from built-in and CRD-based APIs, we could avoid adding CRD-based schemas that define group/version/kinds already defined by built-in types. If we can't prevent that, we should at least order/prioritize the definitions so that kubectl explain and kubectl client-side validation prefer the built-in schemas.
/assign @jpbetz @cici37
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the same lifecycle rules as above.

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the same lifecycle rules as above.

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
I think this is worth re-opening; it may eventually rot out as not planned (rather than as completed).
See https://github.com/kubernetes/kubernetes/issues/106056#issuecomment-956577268
/reopen
@sftim: Reopened this issue.
/lifecycle stale
/remove-lifecycle rotten
What happened?
Creating a CRD with a name of networkpolicies.networking.k8s.io and scope: Cluster appears to replace the built-in NetworkPolicy type client-side, causing inconsistent behaviour between client and server.

If we have a CRD with the following definition:
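The exact CRD manifest from the report isn't preserved in this extract; as an illustrative sketch, a CRD that collides with the built-in NetworkPolicy API might look like the following (the spec schema and version choices here are assumptions, not the reporter's exact manifest):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>; this name collides with the built-in API
  name: networkpolicies.networking.k8s.io
spec:
  group: networking.k8s.io
  scope: Cluster
  names:
    plural: networkpolicies
    singular: networkpolicy
    kind: NetworkPolicy
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              # Matches the `spec.name` field of the lorem object above
              name:
                type: string
```

With such a CRD applied, a `spec.name: lorem` object like the one at the top of this issue validates against the CRD schema client-side, while the server's built-in API still handles persistence.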
And apply that to a Kubernetes cluster. Then run

kubectl explain networkpolicies.networking.k8s.io.spec

The information returned comes from the CRD's schema, and it's possible to create a netpol object in the cluster that looks like the one shown at the top of this issue.
From speaking to @liggitt, it seems that the CRD is not actually overwriting the core type; it is being persisted server-side and polluting areas of the API used by kubectl explain and by client-side validation.

What did you expect to happen?

My initial expectation was that attempts to create a CRD which would overwrite core Kubernetes types would be rejected, as there's no valid use case for allowing that.
How can we reproduce it (as minimally and precisely as possible)?
kind create cluster

Apply the CRD, then run

kubectl explain networkpolicies.networking.k8s.io.spec

and confirm that the new definition from the CRD is returned.

Anything else we need to know?
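Putting the reproduction steps together as a shell sketch (the manifest filename shadow-netpol-crd.yaml is an assumed placeholder for a CRD named networkpolicies.networking.k8s.io; these commands require kind and kubectl on the PATH and create a live cluster):

```shell
# Create a throwaway local cluster
kind create cluster

# Apply a CRD whose name collides with the built-in
# networkpolicies.networking.k8s.io API
kubectl apply -f shadow-netpol-crd.yaml

# kubectl explain now describes the CRD's schema
# instead of the built-in NetworkPolicy spec
kubectl explain networkpolicies.networking.k8s.io.spec
```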
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)