kubernetes / kubernetes


Overwriting a core type with a CRD causes inconsistent `kubectl explain` and create/client-side validation behaviour #106056

Open raesene opened 2 years ago

raesene commented 2 years ago

What happened?

Creating a CRD named `networkpolicies.networking.k8s.io` with `scope: Cluster` appears to replace the built-in NetworkPolicy type on the client side, causing inconsistent behaviour between client and server.

If we have a CRD with the following definition:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.networking.k8s.io 
  annotations:
    "api-approved.kubernetes.io": "https://github.com/kubernetes/kubernetes/pull/78458"
spec:
  group: networking.k8s.io
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
  scope: Cluster
  names:
    plural: networkpolicies
    singular: networkpolicy
    kind: NetworkPolicy
    shortNames:
    - netpol
    # categories is a list of grouped resources the custom resource belongs to.
    categories:
    - all
```

After applying that to a Kubernetes cluster, running `kubectl explain networkpolicies.networking.k8s.io.spec` returns the following:

```
KIND:     NetworkPolicy
VERSION:  networking.k8s.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     <empty>

FIELDS:
   name <string>
```

It's also possible to create a netpol object in the cluster that looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lorem
  namespace: default
spec:
  name: lorem
```

From speaking to @liggitt, it seems the CRD is not actually overwriting the core type; rather, it is persisted server-side and pollutes the areas of the API used by `kubectl explain` and by client-side validation.
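
For reference, a rough way to observe that divergence from the command line (these commands are mine and not part of the original report; the `jq` filter assumes the aggregated OpenAPI v2 definition keys contain the kind name, and the filenames are hypothetical stand-ins for the manifests above):

```console
# Apply the CRD and the shadow-shaped NetworkPolicy (hypothetical filenames):
$ kubectl apply -f netpol-crd.yaml
$ kubectl apply -f netpol-lorem.yaml

# The aggregated OpenAPI document that kubectl uses for explain and client-side
# validation now carries a CRD-based NetworkPolicy definition alongside the
# built-in one (requires jq):
$ kubectl get --raw /openapi/v2 | jq '.definitions | keys[] | select(test("NetworkPolicy"))'

# The server still serves the built-in type, so the CRD-shaped spec is not kept:
$ kubectl get networkpolicy lorem -o yaml
```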

What did you expect to happen?

My initial expectation was that attempts to create a CRD that overwrites core Kubernetes types would be rejected, as there's no valid use case for allowing that.
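
As an aside (this isn't discussed in the issue itself), a cluster administrator could approximate that rejection at admission time. On clusters where ValidatingAdmissionPolicy is available, a sketch along these lines might work; the policy name and the hard-coded kind list are illustrative, and a ValidatingAdmissionPolicyBinding referencing the policy is also required:

```yaml
# Sketch only: reject CRDs that try to redefine a few built-in networking.k8s.io kinds.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-builtin-shadowing-crds   # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apiextensions.k8s.io"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["customresourcedefinitions"]
  validations:
  - expression: >-
      !(object.spec.group == 'networking.k8s.io' &&
        object.spec.names.kind in ['NetworkPolicy', 'Ingress', 'IngressClass'])
    message: "CRDs must not redefine built-in networking.k8s.io kinds"
```

This only covers the kinds listed in the expression; a complete guard would need the full set of built-in group/kind pairs.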

How can we reproduce it (as minimally and precisely as possible)?

Anything else we need to know?

No response

Kubernetes version

```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
```

Cloud provider

None

OS version

```console
# On Linux:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
$ uname -a
Linux DESKTOP-4M2JNS4 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
```

Install tools

kind

Container runtime (CRI) and version (if applicable)

N/A

Related plugins (CNI, CSI, ...) and versions (if applicable)

N/A

liggitt commented 2 years ago

> it's possible to create a netpol object in the cluster that looks like this
>
> ```yaml
> apiVersion: networking.k8s.io/v1
> kind: NetworkPolicy
> metadata:
>   name: lorem
>   namespace: default
> spec:
>   name: lorem
> ```

to be clear, what is actually being created is the following (which is accepted by the built-in network policy API):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lorem
  namespace: default
```

this is not persisted:

```yaml
spec:
  name: lorem
```

`kubectl get -o yaml ...` confirms that the built-in netpol API is still the one serving endpoints (the spec is the default spec applied by the server when an empty network policy is created):

```yaml
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

liggitt commented 2 years ago

/sig api-machinery

liggitt commented 2 years ago

ideally, when aggregating schemas from built-in and CRD-based APIs, we could avoid adding CRD-based schemas that define group/version/kinds already defined by built-in types.

if we can't prevent that, we should at least order/prioritize the definitions so that `kubectl explain` and kubectl client-side validation prefer the built-in schemas
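
For illustration (the commands below are not part of the comment above): today the built-in schema only wins again once the shadowing CRD is removed, which is the behaviour the ordering/prioritization described here would preserve even while the CRD exists.

```console
# Illustrative only: deleting the shadowing CRD restores the built-in schema
# for explain/validation once kubectl's discovery/OpenAPI cache refreshes.
$ kubectl delete crd networkpolicies.networking.k8s.io
$ kubectl explain networkpolicy.spec
# ...now lists the built-in NetworkPolicySpec fields (podSelector, ingress,
# egress, policyTypes) instead of the CRD's single "name" field.
```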

fedebongio commented 2 years ago

/assign @jpbetz @cici37
/triage accepted

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/106056#issuecomment-1086318504):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

sftim commented 4 weeks ago

I think this is worth re-opening; it may eventually rot out as not planned (rather than as completed).

See https://github.com/kubernetes/kubernetes/issues/106056#issuecomment-956577268

/reopen

k8s-ci-robot commented 4 weeks ago

@sftim: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/106056#issuecomment-2324608587):

> I think this is worth re-opening; it may eventually rot out as not planned (rather than as completed).
>
> See https://github.com/kubernetes/kubernetes/issues/106056#issuecomment-956577268
>
> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

sftim commented 4 weeks ago

/lifecycle stale
/remove-lifecycle rotten