kubernetes-sigs / kubectl-validate

Apache License 2.0

Better support for Kustomize #96

Closed devantler closed 1 month ago

devantler commented 6 months ago

What would you like to be added?

I am unable to make it work with Kustomize when running it against my running cluster. I would expect the validation to use the CRDs from the cluster, but it seems there is no OpenAPI spec for Kustomize yet.


Furthermore, it would be neat to have some way to whitelist/ignore patches by default (perhaps by checking the file name), so that patches do not fail validation just because they do not follow the full spec. For example, this patch adds a GHCR secret to the Flux HelmRelease and allows it to use the host network:

apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: gha-runner-scale-set
  namespace: gha-runner-scale-set
spec:
  values:
    template:
      spec:
        imagePullSecrets:
          - name: ghcr-auth
        hostNetwork: true

Why is this needed?

I would expect Kustomize to work out-of-the-box, or with little configuration, as it is quite commonly used in GitOps to deploy components and to patch manifests.

alexzielenski commented 6 months ago

kustomize includes a command to render the YAMLs. The workflow would be:

kustomize build <input_folder> -o <rendered_output>
kubectl-validate <rendered_output>

Given it is a single step in a CI workflow, I'm not sure it makes sense to introduce a dependency for this.

Note that kustomize renders everything into a single output file, so you may want to split the documents using yq (source). A basic example, naming the files after metadata.name:

kustomize build | yq --split-exp '.metadata.name + ".yaml"' --no-doc
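If yq is not available in CI, the same split can be approximated with a few lines of standard-library Python. This `split_manifests` helper and its regex-based name lookup are an illustrative sketch, not part of kubectl-validate or kustomize:

```python
import pathlib
import re


def split_manifests(rendered: str, out_dir: str = "rendered") -> list:
    """Split a multi-document YAML stream into one file per resource,
    naming each file after its metadata.name (like the yq example)."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for i, doc in enumerate(re.split(r"(?m)^---\s*$", rendered)):
        if not doc.strip():
            continue
        # Naive lookup of the top-level metadata.name; assumes the standard
        # two-space indentation that kustomize emits.
        match = re.search(r"(?m)^metadata:\n(?:  .*\n)*?  name:\s*(\S+)", doc)
        name = match.group(1) if match else f"doc-{i}"
        path = out / f"{name}.yaml"
        path.write_text(doc.strip() + "\n")
        written.append(path)
    return written
```

Feed it the output of `kustomize build` and then point kubectl-validate at the output directory.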
alexzielenski commented 6 months ago

> Furthermore, it would be neat to have some way to whitelist/ignore patches by default (perhaps by checking the file name), so that patches do not fail validation just because they do not follow the full spec. For example, this patch adds a GHCR secret to the Flux HelmRelease and allows it to use the host network:

You can use overlay schemas to provide a patch schema that injects `x-kubernetes-preserve-unknown-fields: true` for the paths you want to ignore.
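A minimal overlay might look like the sketch below. The file layout and schema-key naming that kubectl-validate expects for overlay schemas are assumptions here (check the project's overlay-schemas examples for the exact convention); only the `x-kubernetes-preserve-unknown-fields` extension itself is standard Kubernetes OpenAPI:

```yaml
# Hypothetical overlay schema for helm.toolkit.fluxcd.io/v2beta2 HelmRelease.
# Marking spec.values with x-kubernetes-preserve-unknown-fields: true tells
# the validator to accept arbitrary fields beneath it, so chart-specific
# patches (imagePullSecrets, hostNetwork, ...) no longer fail validation.
components:
  schemas:
    io.fluxcd.toolkit.helm.v2beta2.HelmRelease:
      properties:
        spec:
          properties:
            values:
              x-kubernetes-preserve-unknown-fields: true
```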

devantler commented 6 months ago

> kustomize includes a command to render the YAMLs. The workflow would be:
>
> kustomize build <input_folder> -o <rendered_output>
> kubectl-validate <rendered_output>
>
> Given it is a single step in a CI workflow, I'm not sure it makes sense to introduce a dependency for this.
>
> Note that kustomize renders everything into a single output file, so you may want to split the documents using yq (source). A basic example, naming the files after metadata.name:
>
> kustomize build | yq --split-exp '.metadata.name + ".yaml"' --no-doc

I'll try this out and let you know once I have a feel for the workflow. I would love for the validation tool to work without much fiddling, and in this case I think it would make sense to add a flag that lets the tool run kustomize build before validating files. This seems like a widespread use case, so having to do the build beforehand feels counterintuitive. In any case, the error message does not indicate that a missing kustomize build is the problem, so that is hard to infer from my perspective. What do you think?

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kubectl-validate/issues/96#issuecomment-2319592595):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.