kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations

Support for mixins #759

Closed sergey-shambir closed 4 years ago

sergey-shambir commented 5 years ago

Kustomize currently interprets each overlay as a full set of resources and patches: a patch can only modify a resource that is listed directly in resources: or pulled in indirectly via bases:. This means it is impossible to collect a group of resources and patches together for later reuse. Currently I can create the following overlay hierarchy:

  1. Overlay base without any bases: defines the cluster skeleton with services and pods
  2. Overlay base_debug which inherits base and enables the debug tools included in the containers (these tools are disabled by default so the same container can be used in production and in test environments); see the sketch after this list
  3. Overlay base_debug_aws which adds AWS configs and secrets for the services
  4. Overlay base_debug_aws_scale_hard which adds a lot of replicas to each service to test horizontal scaling
  5. Final overlay test_develop which contains the configuration for a concrete test environment available at a concrete domain
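A minimal sketch of one link in that chain, e.g. base_debug/kustomization.yaml (the patch file name enable_debug_tools.yaml is illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base # the skeleton overlay from step 1
patchesStrategicMerge:
  - enable_debug_tools.yaml # illustrative file name; re-enables the bundled debug tools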

This looks like inheritance in OOP (with multiple inheritance when you have multiple bases).

Of course, I can mix the changes from steps 2-4 into the base or into the final overlays. I can also create several bases (one for the skeleton and a few for things like secrets/configmaps) and keep the patches in one place from which multiple overlays can reuse them.

But from my point of view, it would be better to allow mixins: overlays that contain patches for resources the overlay itself doesn't include. Like this:

  1. Overlay base defines pods a_deployment.yaml and b_deployment.yaml
  2. Overlay debug defines the patch a_deployment_debug.yaml, but includes neither base in its bases nor a_deployment.yaml in its resources
  3. Overlay aws adds a new resource aws_secret.yaml and a new patch for b_deployment.yaml
  4. Overlay scale_hard adds patches with a lot of replicas for a_deployment.yaml and b_deployment.yaml
  5. Overlay test_develop combines base, debug, aws, and scale_hard in a predictable order.

It's possible to change kustomize in (at least) two ways:

  1. Allow overlays included via the bases list to patch resources defined by a previous base in the same list:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base # defines pods A and B
  - ../mixins/debug # defines a patch for A, but does not add A as a resource or `base` as its base kustomization
  - ../mixins/aws # defines a patch for B, but does not define B
  2. Add a separate mixins: key which introduces a Mixin: an overlay that can patch resources it does not define, processed only after all bases have been processed (see the sketch below).
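A hypothetical kustomization using option 2 might look like this (the mixins: key does not exist in kustomize; this is only a sketch of the proposed syntax):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base # defines pods A and B
mixins: # hypothetical key, processed after all bases are merged
  - ../mixins/debug # patches A without declaring A as a resource
  - ../mixins/aws # adds aws_secret.yaml and patches B
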
sergey-shambir commented 5 years ago

Looks related to #727

paveq commented 5 years ago

It would be very useful to be able to "compose" an application from individual components, without having to rely on an inheritance chain.

As an example, the application base layer should be able to refer to a database service that is neither part of the application nor something the base layer should extend. Instead, the final kustomize layer should be able to mix and match the application with different DB sizes / bases.

fentas commented 5 years ago

I really like the idea of a separate key like mixins: that allows loading a group of patches without needing to reference bases there.

We would also like to have a structure like

.
├── kustomize
│   ├── base
│   │   └── # all the base services / resources
│   ├── overlay
│   │   └── # a collection of base services for different environments (prod/dev/etc.)
│   └── patches
│       └── # different sets of general patches (e.g. high-availability changes etc.)
└── playbooks
    └── # <*n physical environments>
        ├── patches
        │   └── # specific patches
        └── # uses an overlay as base, adds specific patches and some general patches

The last part (within playbooks) is kind of a pain: I have to point at each individual patch/resource within a general patch, and those references have to be maintained in duplicate (across multiple playbooks) instead of just pointing at a collection. Also, doing patches this way is not possible right now; it triggers a security error (you can't reach back up the directory tree).

edit: to get around the security issue, a workaround is to create a symlink pointing back up the file tree.

kid commented 5 years ago

One use case for this would be a multi-tenant system with multiple release channels and resource allocations per tenant.

To do this currently, one would need to create one base for each size/release-channel combination.

With mixins, this could be achieved by having one mixin to override images and another to set resources.
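With the hypothetical mixins: key sketched earlier in this thread, that could look like (names and paths are illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base # the shared application base
mixins: # hypothetical key
  - ../../mixins/images-stable # overrides image tags for the stable release channel
  - ../../mixins/resources-large # sets resource requests/limits for the large tier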

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

dsyer commented 4 years ago

This issue should stay alive. Lack of activity is no measure of interest here. We are waiting for something to actually happen.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

dsyer commented 4 years ago

Bump.

pgpx commented 4 years ago

I created a PR for #1251 that essentially does this - #2168 - though I added a new Kind (KustomizationPatch) that works as a 'mixin' in the sense of the initial comment in this thread. A better name than KustomizationPatch is needed though!

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

dantman commented 4 years ago

/remove-lifecycle rotten

zishan commented 4 years ago

@pgpx looks like your https://github.com/kubernetes-sigs/kustomize/pull/2168 is related to https://github.com/kubernetes/enhancements/pull/1803

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

Shell32-Natsu commented 4 years ago

Looks like components has solved this issue. https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md
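For example, the debug mixin from the original comment maps onto a component roughly like this (a minimal sketch; the file layout and patch name reuse the earlier example):

# components/debug/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesStrategicMerge:
  - a_deployment_debug.yaml

# overlays/test_develop/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/debug # enables the debug tools
  - ../../components/aws # adds AWS secrets and patches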

/close

k8s-ci-robot commented 4 years ago

@Shell32-Natsu: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/759#issuecomment-707895588):

> Looks like `components` has solved this issue. https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.