kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

Support composition in kustomize #1251

Closed apyrgio closed 4 years ago

apyrgio commented 5 years ago

We (@arrikto) have been using kustomize lately in order to deploy our software on Kubernetes, using a single base and multiple overlays for each deployment site. This is the typical inheritance scenario that kustomize supports.

As the number of deployments grows, however, so do the things that are common between them. For example, consider the following deployments:

This is a composition scenario, where common configuration logic is put into components for reusability. To achieve it with kustomize, one would probably create an intermediate overlay for each component and a multi-base overlay for each deployment, e.g.:

                    +--------------+
                    |              |
       +----------->+  Component1  +----------+
       |            |              |          |
+------+-------+    +--------------+      +---v----+
|              |                          |        |
|  Deployment  |                          |  Base  |
|              |                          |        |
+------+-------+    +--------------+      +---^----+
       |            |              |          |
       +----------->+  Component2  +----------+
                    |              |
                    +--------------+
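
For illustration, the kustomization.yaml files in such a diamond might look roughly like this (hypothetical paths, not copied from the linked example repo):

overlays/component1/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - component1-patch.yaml

overlays/aggregate/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../component1
  - ../component2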

Unfortunately, this does not work, since kustomize ends up duplicating the YAMLs of the base. We have an example topology here that showcases this error:

(NOTE: You need kustomize v2.1 to run the examples)

$ kustomize build github.com/ioandr/kustomize-example//diamond/overlays/aggregate
Error: accumulating resources: recursed merging from path '../overlay2': may not add resource with an already registered id: extensions_v1beta1_Deployment|~X|my-deployment

If we change our components to not point to the base:


                           +--------------+
                           |              |
              +----------->+  Component1  |
              |            |              |
              |            +--------------+
              |
       +------+-------+       +--------+
       |              |       |        |
       |  Deployment  +------>+  Base  |
       |              |       |        |
       +------+-------+       +--------+
              |
              |            +--------------+
              |            |              |
              +----------->+  Component2  |
                           |              |
                           +--------------+

This works for creating new resources, but fails for patches, since patches must refer to resources that are defined in one of their bases. We have an example topology here that showcases this error:

$ kustomize build github.com/ioandr/kustomize-example//mixin/overlays/aggregate
Error: accumulating resources: recursed accumulation of path '../overlay1': failed to find target for patch extensions_v1beta1_Deployment|my-deployment
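
To make the failure concrete, here is a rough sketch of such a component (hypothetical file names): it adds its own resources and carries a patch, but does not include the base that defines the patch's target:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - extra-config.yaml          # adding new resources works fine
patchesStrategicMerge:
  - my-deployment-patch.yaml   # fails: my-deployment is defined only in the base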

It seems that currently there's no better way than duplicating code across overlays, which is not maintainable in the long run. Still, we strongly believe that some form of composition support would make kustomize much more usable. This is echoed in other issues too (#759, #942), as well as in high-profile projects like Kubeflow, which needed to create a tool on top of kustomize to support composition.

Are there any plans to support composition in kustomize and, if not, does the core team have a better suggestion to tackle this configuration reusability issue?

/cc @monopole @Liujingfang1

jbrette commented 5 years ago

@apyrgio Please have look at: https://github.com/kubernetes-sigs/kustomize/pull/1253

There is still a big issue to address (variables pointing at names), but the behavior seems to be coming along. Reproducing your setup here, this is the kustomize build output:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: p1-my-deployment
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: p2-my-deployment
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-deployment

/cc @monopole @Liujingfang1

alexferl commented 5 years ago

@jbrette I saw your PR #1253 yesterday and tried it locally, but it didn't fix my issue with the setup that you can see here.

The interesting thing is that if you comment out line 6 (- frontend) in base/kustomization.yaml and development/kustomization.yaml, you actually get the expected output for the backend deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    env: dev
  name: my-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      env: dev
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - image: my-backend-image
        name: my-backend

It's really when you add another resource that things go wrong.

apyrgio commented 5 years ago

Hi @jbrette, thanks for taking the time to look into this issue.

I've checked out your PR and built kustomize from it. With your changes, our diamond example builds correctly. However, when we add patches to the mix, I'm afraid it fails like before. You can see this in our updated diamond example, where the two overlays patch the same base resource:

$ kustomize build github.com/ioandr/kustomize-example//diamond-with-patches/overlays/aggregate
Error: accumulating resources: recursed merging from path '../overlay2': may not add resource with an already registered id: extensions_v1beta1_Deployment|~X|my-deployment

The interesting thing here is that if the same two patches were defined in a single overlay, kustomize would operate as expected. This makes me believe that the error we see here is a side-effect of how kustomize builds YAMLs when the load paths diverge, and not the intended behavior. Is this the case, or am I missing something?

Also, I think that the diamond example in your repo refers to a different use case (see here). We explicitly don't use namePrefix in our overlays, because the point of the composition scenario is to deploy a single app with N patches, not N apps with 1 patch each. So, if you're ok with this distinction, let's experiment on examples with no namePrefix, to make this issue simpler.

Liujingfang1 commented 5 years ago

Kustomize expects different overlays of one common base to be for different apps. Then, in the aggregated overlay, you have multiple apps. Adding a nameprefix, namesuffix or namespace in the overlays is what makes them different apps. @apyrgio, if you want to keep a single app, you can apply the patches in the same kustomization, rather than using multiple overlays.
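
A minimal sketch of that suggestion, with hypothetical patch file names: a single overlay pulls in the base and applies all the patches itself, so there is only one copy of the base resources:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - component1-patch.yaml
  - component2-patch.yaml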

apyrgio commented 5 years ago

Thanks for the clarifications @Liujingfang1. So, if I understand correctly, multi-base overlays are used to duplicate the resources of a base, either via name prefixes/suffixes or namespaces, to support this use case:

                +------+
                |      |
       +--------> App1 +-------+
       |        |      |       |
+-----------+   +------+   +---v----+
|           |              |        |
|  Project  |              |  Base  |
|           |              |        |
+-----------+   +------+   +---^----+
       |        |      |       |
       +--------> App2 +-------+
                |      |
                +------+

I can see the merits of this use case, but I find it surprising that it can't co-exist with the use cases presented in this issue.

If it helps, I think that the surprising part is that people use kustomize with the expectation that, given a K8s resource ID, they can apply one or more patches to it. They can do so within an overlay, and they'd expect to do so from different overlays as well. In the examples that I and @admiralobvious have supplied, this can work, e.g., by patching the same resource sequentially, in the order that the patches were defined.

And yes, one can define patches in a single place and refer to them in various kustomizations, using --load_restrictions none. This flag is a bit too permissive though, and it doesn't address the issue of generators, which can only be defined in kustomizations.

If there are no serious semantic issues, I'd urge you to reconsider this limitation. I believe it will be helpful for a lot of kustomize users, without breaking existing use cases. If there are though, can you please explain for posterity what they are?

jbrette commented 5 years ago

@admiralobvious @apyrgio Thanks for the additional examples you provided today. I realigned the examples with what you provided:

  1. We removed the prefixes from the diamond example and went back to the original example here, and it seems to work (with the PR). But we agree that as long as we don't try something like your diamond-with-patches or mixin examples, it is not really useful. Will attempt to address it later.

  2. I took @admiralobvious's examples and changed the kustomization.yaml a little in both development and production. The output seems to be the expected one (also with the PR): here.

Please double-check. The second set of examples shows that composition can work to some extent, but that the structure of the folders and kustomization.yaml files is crucial.

paveq commented 5 years ago

It would be super-useful to support mixins / composition.

As an example, we currently have a structure set up like this:

                                                                 +--------------+
                                                              +--+App Deployment|
                                                              |  +--------------+
                                                              |
                                                              |  +------------+
                                                              +--+MySQL       |
                                                              |  +------------+
                         +----------+                         |
                     +---+app|prod  +<---+                    |  +--------------+
                     |   +----------+    |      +--------+    +--+Redis|cache   |
+--------------+     |                   +------+App base+<---+  +--------------+
|client|A|prod +<----+                   |      +--------+    |
+--------------+         +----------+    |                    |  +--------------+
                     +---+app|dev   +<---+                    +--+Redis|session |
+--------------+     |   +----------+                         |  +--------------+
|client|B|dev  +<----+                                        |
+--------------+     |                                        |  +--------------+
                     |                                        +--+Varnish       |
+--------------+     |                                        |  +--------------+
|client|A|dev  +<----+                                        |
+--------------+                                              |  +--------------+
                                                              |  |Persistence   |
                                                              +--+Layer         |
                                                                 +--------------+

Considering the architecture, I would prefer to move some of the components to a higher level and mix-and-match different components (e.g. use a different kind of MySQL setup, or not use the persistence layer every time). But the deployment requires a variable reference to the MySQL hostname, and references don't work unless MySQL is directly under "app-base". With an inheritance-only approach, despite it looking modular, it's still very difficult to make any changes that should not be applied to every instance, unless the entire tree is split.

The persistence layer also contains patches that need to be applied on "app-base", because they apply to sibling components.

alexferl commented 5 years ago

@jbrette yes, your modifications on my example make it work, even on kustomize 2.1.0. Thank you! Indeed, getting the right folder/kustomization.yaml structure is crucial and tricky in my opinion. I tried several permutations myself before coming here for help.

One thing I can see that might be less desirable with that method is how long the resources list could get in base/kustomization.yaml. We have tens of apps, each with 2 to 10+ YAML files, so the list will get long pretty quickly.

jbrette commented 5 years ago

@paveq Even though I'm still working on getting the diamond "import" of files, you don't absolutely need it to achieve composition and mixins: have a look at the mixin example

Each component comes configured with default ports. Those values are then updated for prod and dev.

/cc @admiralobvious @apyrgio

monopole commented 5 years ago

@paveq, @apyrgio Love the ascii art above!

@jbrette @ioandr Also, appreciate the examples

It would be immensely helpful if someone wrote a test-only PR showing the inputs, the current output, and some commentary about what the desired output should be.

We can then merge it, and when the bug / feature request is fixed, one can edit the test to prove it. See #1279, which updates a test entered in #978.

As a start, someone could copy/paste this: pkg/target/diamonds_test.go

The inputs are in YAML, as is the output. The case covered is

//         dev      prod
//             \   /
//            tenants
//          /    |    \
//      kirk   spock  bones
//          \    |    /
//             base

with different examples of patch addressing as you go up the stack.

So - kustomize supports mixins (compositions) in the sense that any kustomization can combine other kustomizations. At the end of the day, all resources need unique names. If you send a stream of YAML to k8s with a namespace-GVKN defined twice, the last one will overwrite the first (maybe you want this, maybe you don't).

One could add another layer on top of the above example combining dev and prod, if one wanted to deploy them in one kustomize build . | kubectl apply -f - command (i.e. send dev and prod to the same cluster).

But for that to work, the kustomizations at dev or prod should add either a namespace, namePrefix or nameSuffix directive to keep dev and prod objects distinct.
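
For instance, a minimal sketch of a dev overlay that keeps its objects distinct from prod's (hypothetical layout):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: dev-
resources:
  - ../../base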

Anyway, please, let's express more of these desires as tests.

paveq commented 5 years ago

@monopole One thing that would help composition would be for an overlay Kustomization to be able to define a prefix or suffix for the base it is using, and to be able to request the same base multiple times. For example, in my case I'm using Redis two times, but currently I have to keep them in two separate bases, as I need a different name for each.

As an example, the syntax could be:

kind: Kustomization
resources:
- components/redis
  params:
    nameSuffix: "-cache"
- components/redis
  params:
    nameSuffix: "-session"

With this syntax, it might be possible to pass some other parameters (e.g. pass variables to the base?) to the base layer in the future as well.

I'll attempt to build an example of the input I would like to see working, and the result I would like to see.

apyrgio commented 5 years ago

@jbrette: Nice job adapting @paveq's use case into a kustomization example.

I checked out the files and have some minor questions:

  1. What is the kind: Values CRD in the example? Is it a CRD that is solely used to hold arbitrary values?
  2. The dev/prod overlays define some Values resources that are not used by anyone. Are they there on purpose?

Also, one final observation, though @paveq can also weigh in on this. I think his remaining issue was that logic moved higher up in the hierarchy (in this case, making the persistence layer opt-in) cannot easily be combined with other overlays when both patch the same resources. Your example side-steps this issue because it does things linearly, but slightly more complex examples would hit this limitation. So, if I understand correctly, your example partially solves @paveq's use case, right? In order for it to be fully solved, we need some form of composition support higher than the base.

jbrette commented 5 years ago

@monopole Added a go version of the mixin PR Commit

@apyrgio @paveq @admiralobvious @ian-howell

  1. The CRD concept is really important. a. You can have any CRD you want. They act as a catalog to share data across constructs. b. A variable acts as a reference/pointer to another object. Hence it is very similar to what you can do when you compose complex structures in C++, Java or Go. Kustomize supports inlining of entire trees. See PR and Example
  2. The values in the final clients are not used. Kind of confusing. Forgot to remove them.
  3. If you combine the autovar, the inlining of trees and the fact that Values are CRDs... you can apply standard kustomize patches at the app/dev and app/prod level on those Values CRDs, and you will have your opt-in/opt-out feature. Will update the example later.

apyrgio commented 5 years ago

As per @monopole's suggestion, we created a PR that shows the current kustomize limitation, with an example that mimics the diamond test that @monopole quoted.

We tried to keep it as simple as possible, but we hope that it adequately describes the functionality that the people interested in this issue want to see in kustomize.

monopole commented 5 years ago

@apyrgio that's great. Let's get that in as an example to point to.

Can you comment on why you don't want to do this as stacked overlays?

monopole commented 5 years ago

@paveq,

For example, in my case I'm using Redis two times, but I currently I have to have them in two separate bases as I need different name for them.

I think you can use one base, and two overlays that change the name, e.g. https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/target/diamonds_test.go but with a name suffix instead of a name prefix.

apyrgio commented 5 years ago

@monopole You mean why we can't convert the example from this:

     composite
   /     |     \
probe   dns  restart
   \     |     /
       base

to this?

composite
    |
  probe
    |
   dns
    |
 restart
    |
  base

There are two reasons. First, you can think of the example as a simplified version of:

    prod    dev
   /   \  /    \
probe   dns  restart
   \     |     /
       base

The base layer contains the vanilla app, the middle layer contains some modifications you can apply to it, and the upper layer contains the deployments of the app, each with a different choice of modifications. This example cannot be built with kustomize if the probe, dns and restart overlays patch the same base. Also, it can't be converted to two stacks without copying things around (here the dns overlay; in larger projects, a lot of overlays).

The second reason is semantics. If you've isolated some common knobs of an app into components/overlays, how can you enforce a dependency order on them? Why should the probe overlay depend on the dns overlay, given that they are unrelated? What if a second deployment wants a different order?

jbrette commented 5 years ago

@paveq @monopole @apyrgio @admiralobvious @ioandr

Please have a look at the new version of the PR. I think we got it. Your new test is now returning the proper values: check here

ioandr commented 5 years ago

Hi @jbrette,

thanks for your focus on this; we really appreciate your efforts. We believe this is getting really close to the desired result; however, we would like to point out certain things.

We recently contributed another PR (#1290) that makes the diamond composition test case slightly harder to solve. With this change, unfortunately, your PR fails to produce the expected output. Here, the reason is that converting a patched resource into a patch itself can re-introduce a changed field.

We’d go one step further though, and argue that any type of merging that happens after the patches have been applied is inherently ambiguous, unless you recreate the patches somehow. A simpler approach may be to traverse the overlays depth-first and make patches that target the same GVKN apply to the same resource, thereby side-stepping any conflicts.

@monopole We would love to hear your opinion on this, and specifically whether this approach is in the right direction.

monopole commented 5 years ago

OK, now that we've got a load of tests we're making progress.

Would it be reasonable to retitle this issue as:

Support patch reuse in kustomize.

Two ways to do so:

Run kustomize build with --load_restrictor none.

This lets paths in the patchesStrategicMerge field reach outside the kustomization root, meaning one can put patch files in some common on-disk location, and share them between other kustomizations.
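
A rough sketch of that layout (hypothetical paths; the exact flag spelling has varied across kustomize versions):

overlays/prod/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - ../../common/probe-patch.yaml   # outside this kustomization's root

$ kustomize build --load_restrictor none overlays/prod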

Downsides:

See TestIssue1251_Patches_ProdVsDev

Use the new transformers field (use plugins).

Like the patchesStrategicMerge field, this lets one reference locally defined transformer plugin configs, but the transformers field also lets one reference remote kustomizations (just like the resources field).

So define three transformer configs representing the three patches, give each one their own kustomization file (making them relocatable), then reference these kustomizations in the transformers field.

See TestIssue1251_Plugins_ProdVsDev

This test uses the builtin PatchJson6902Transformer plugin, with three different instances (configured to add the probe, add the dns policy, add the restart policy).
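
For concreteness, a rough sketch of one such reusable kustomization (hypothetical names; the field spellings follow the builtin plugin as of kustomize v3 and may change while plugins are alpha):

probe/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
transformers:
  - add-probe.yaml

probe/add-probe.yaml:

apiVersion: builtin
kind: PatchJson6902Transformer
metadata:
  name: add-probe
target:
  group: extensions
  version: v1beta1
  kind: Deployment
  name: my-deployment
path: probe-patch.json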

Plugins were introduced both as an extension point and as a means to do this kind of modular reuse.

Plugins are alpha, so the exact mechanics / notation for using them will evolve as we all figure out the best way to use them. But the concept is built into the core of kustomize now - all "native" operations now use builtin plugins.

monopole commented 5 years ago

forgot to mention that the tests referred to in the previous comment are in https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/target/diamondcomposition_test.go

They have ProdVsDev in their names because they do the example you described above:

    prod    dev
   /   \  /    \
probe   dns  restart
   \     |     /
       base

Thanks for these specific examples.

apyrgio commented 5 years ago

Thank you very much for the prompt reply and the abundance of tests. We tried to dig through the suggested solution and we have some comments below. We don't have much experience with transformers, so please correct us if we've understood something wrong.

As we've mentioned in a previous exchange with @Liujingfang1, we don't see composition as patch reuse. We need to reuse kustomizations as well, because they allow us to do the following in a single component:

  1. Specify generators for maps and secrets.
  2. Add new resources.
  3. Alter component resources using builtin or custom transformers.
  4. Define vars.
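
Purely for illustration, a sketch of what we would like a single component to be able to bundle (hypothetical names; the patch is exactly the part that currently cannot target a resource defined in a sibling base):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: app-settings
    literals:
      - FEATURE_FLAG=enabled
resources:
  - extra-service.yaml
patchesStrategicMerge:
  - use-app-settings.yaml   # patches the base Deployment to use the generated map
vars:
  - name: APP_SETTINGS_NAME
    objref:
      kind: ConfigMap
      name: app-settings
      apiVersion: v1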

From what we gather, specifying individual transformers is an interesting way to side-step the load restrictions limitation but, unfortunately, they do not cover the above points. For instance, a developer can't define a transformer that will create a configMap and use it on a Deployment. And that's a bummer, because all these things are currently expressed with overlays, and we are so close to making them work.

As for the title, we're definitely not fixated on it. It's just that we wanted an umbrella term for the act of putting everything kustomize supports into a box and using it from various places. If patch reuse was our sole concern, we'd probably be ok with --load_restrictions none for a while.

Does our end goal make more sense now? If so, feel free to change the title to whatever seems more fit. Also, we'd be more than happy to update the test scenario with an overlay that uses a configMapGenerator, but it would unfortunately break some of the new test cases.

monopole commented 5 years ago

TestIssue1251_Plugins_ProdVsDev solves the specific problem (prod, dev, probe, etc). This specific problem was a patch reuse problem, solved with three re-usable kustomizations, each containing a single jsonpatch transformer. I'd do it that way, and not use the --load_restrictions none method. I was just curious how that approach would look, so I wrote it up and it gave the same answer as the transformers approach. Either way gets the answer you wanted (DNS and Prober in one deployment, DNS and restart in the other, using a common base).

For instance, a developer can't define a transformer that will create a configMap and use it on a Deployment. And that's a bummer,

This test has many examples of generated configmaps working with deployments. It's a core feature. Though we'd say ConfigMaps are generated by generators, not transformers, and I'm not sure if you made a typo or meant to use the word transformer.

There are also many examples of kustomization file reuse, as that's the whole point of allowing kustomizations to be specified in the resources: field. If Alice makes a kustomization to deploy minecraft, Bob can use it as a base and adapt it to his needs. But I don't want to have a hand-wavy debate.

we'd be more than happy to update the test scenario with an overlay that uses a configMapGenerator, but it would unfortunately break some of the new test cases.

Yes! Please do; that's ideal.

A passing test that exhibits undesirable behavior is the best way to point out the need for (and get) the support, or alternatively, to have someone point out an alternative approach that doesn't require a new feature.

Composition is a good idea, which is why kustomize supports it. But the devil is in the details, and Bob's view of how composition is achieved might not match Alice's.

You might want to chime in on #1292. It's related, and would be another way to solve this that doesn't work yet. I'd say @jbrette was working in this direction, but it needs a wider scope and some careful thought about backward compatibility.

apyrgio commented 5 years ago

TestIssue1251_Plugins_ProdVsDev solves the specific problem (prod, dev, probe, etc).

This test has many examples generated configmaps working with deployments. It's a core feature. Though we'd say ConfigMaps are generated by generators, not transformers, and I'm not sure if you did a typo or meant to use the word transformer.

Ah, I see now the source of the confusion. I thought you proposed custom transformers as an alternative to the components we are describing in this issue, which is why I pointed out that transformers don't do all the neat things that overlays do. But you suggested them as a solution to the patch reuse problem specifically, which flew over my head.

Yes! Please do; that's ideal.

We have opened another PR with a new test to further clarify our needs and preserve existing test cases intact. Feel free to comment and review.

You might want to chime in on #1292. It's related, and would be another way to solve this that doesn't work yet. I'd say @jbrette was working in this direction, but it needs a wider scope and some careful thought about backward compatibility.

The proposed merge strategy is a solution to the resource ID conflict that we describe at the start of this issue, right?

That's excellent news! This functionality would be a great addition to kustomize. We'll further comment on #1292 for any implementation details. Also, I'd propose keeping this issue open for a while, so kustomize users in need of components/mixins can comment on whether the approach described in #1292 covers their case.

jbrette commented 5 years ago

@apyrgio @ioandr @paveq @admiralobvious

I had a second try at the issue. It does not do everything, but what it does, it seems to do right: PR

@monopole and @Liujingfang1 could for sure go into more detail, but here are some things you need to understand:

The key concept is that currently, every time you put "../base" in your kustomization.yaml, kustomize by design makes a copy of the resources. This is actually really useful for a lot of use cases, for instance when you want to write something once and "copy/paste" it across multiple namespaces or prefixes.

It starts to get tricky when you are trying to do a "diamond import", because kustomize currently does not see the resources as coming from the same ancestor. So if your base resources are really slim, with almost no default values, and your upper levels are "adding" to the base resources, the merging algorithm built into kustomize works pretty well. It is also really good at detecting merge conflicts (again because it does not have the concept of a common ancestor).

The same goes for variables. Variables are kind of global, so the algorithm has to be protected against a variable pointing at two different resources. Because you define a variable against the original name of a resource, if you apply a set of suffixes/prefixes to the resource the variable points at, you can end up with a variable pointing at multiple resources (when doing a diamond import).

So to sum up, this PR helps a lot with simple cases where you are using folders to group hundreds of resources. As long as the automatic merge works, you don't need a 10,000-line kustomization.yaml (when your project gets big). Or if you put common files and variables in your base and keep importing them, it works fine.

The case we are now trying to address is how to get kustomize to understand that some of the resources have a common ancestor, which would help it perform a proper merge without a very complex kustomization.yaml. When you really look at it, you see merge conflicts, forks... so we are basically trying to do with K8s resources what git does with files. This is the whole point of issue #1292.

This PR is, in theory, a step in the right direction; it keeps backward compatibility with Kustomize's behavior but does not address the "merge conflict" aspect of diamond composition.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

apyrgio commented 5 years ago

There's still a PR regarding this issue and a proposal from @monopole (see #1292). Also, this issue keeps getting thumbs up at random intervals, which means that people are still stumbling on it and are interested.

Until we know how (or if) this will get implemented, I'd propose keeping this issue open.

pgpx commented 4 years ago

Could another approach be to allow 'patch' kustomization files that can inject themselves into the resource tree, by effectively rewriting the resources of any kustomization that includes them? This wouldn't require any change to the merge semantics of kustomizations, and would only really need to be a pre-processing step.

We can identify whether a kustomization.yaml file is a 'patch' with a new kind (or other flag) - I don't think that we should require clients to always specify that. Whenever such a 'patch' is used in a parent kustomization:

Repeat this using depth-first-recursion!

For example:

deployment1:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [deployment1-resources-before-patch1]
- patch1
- [deployment1-resources-after-patch1]
[deployment1-configuration]

patch1:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: KustomizationPatch
resources:
- [patch1-resources]
[patch1-configuration]

Then rewrite patch1 to include all of the resources from deployment1 that came before it (and make it a normal kustomization):

patch1_updated:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [deployment1-resources-before-patch1]
- [patch1-resources]
[patch1-configuration]

And rewrite deployment1 to use patch1_updated instead of those resources and the patch:

deployment1:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- patch1_updated
- [deployment1-resources-after-patch1]
[deployment1-configuration]

+-------------------------------------------+
|                  deployment1              |
+-------------------------------------------+
  |      |      |         |        |      |  
+---+  +---+  +---+  +--------+  +---+  +---+
| a |  | b |  | c |  | patch1 |  | d |  | e |
+---+  +---+  +---+  +--------+  +---+  +---+
                      |      |
                    +---+  +---+
                    | f |  | g |
                    +---+  +---+

Becomes:

+-------------------------------------------+
|            deployment_updated             |
+-------------------------------------------+
                |                  |      |  
   +--------------------------+  +---+  +---+
   |       patch1_updated     |  | d |  | e |
   +--------------------------+  +---+  +---+
    |    |      |      |     |
+---+  +---+  +---+  +---+  +---+
| a |  | b |  | c |  | f |  | g |
+---+  +---+  +---+  +---+  +---+

apyrgio commented 4 years ago

So @pgpx, if I understand correctly, you propose a way for Kustomize to support composition by internally converting a list of overlays (KustomizationPatches):

resources:
  - base
  - component1
  - component2

to a chain of overlays:

component2_patched -> component1_patched -> base

and to do this, you define a new Kustomization kind (KustomizationPatch), and expect that Kustomize will treat it in a special way. Compared to @monopole's proposal (see #1292):

I must say I like this approach; it does what this issue intends, in a way that most users would find familiar, i.e., applying changes to a base resource sequentially.


I would also like to share our experience with @monopole's suggestion for composition (see here), i.e., using custom generators and transformers.

We experimented with an application of ours and factored out some configuration parts into components, i.e., base-less Kustomization directories that consist of generators, transformers and extra resources. Our experience with this composition approach is the following:

Pros:

Cons:


In a nutshell, yes, we can technically create components with this approach. However, it needs some extra (syntactic) sugar before more people can use it. Personally, @pgpx's suggestion ticks most of my boxes as far as UX is concerned, so I'd suggest taking it into consideration.

kunickiaj commented 4 years ago

@apyrgio the example you provided here https://github.com/ioandr/kustomize-example/tree/master/diamond-with-patches is exactly the same use case we have. In case you've found a solution, I'd love to hear it.

While we could layer the different patches as overlays, that doesn't really make sense, because they're not related and it would create the wrong relationship between them.

In our case, we have patches that add support for things like a cloudsql proxy, a secrets server configuration, etc. These are things that may be optional in certain environments.

ioandr commented 4 years ago

Hi @kunickiaj,

we have updated our diamond-with-patches example with an alternative approach that defines reusable components. Under the hood, these components leverage PatchTransformers, as @monopole previously suggested.

You can view the updated example here: https://github.com/ioandr/kustomize-example/tree/master/diamond-with-patches-transformers. Feel free to comment or give us any feedback.

Also, note that the transformers-based solution described above requires kustomize v3+. You can see the related discussion in https://github.com/kubernetes-sigs/kustomize/pull/1615#issuecomment-541106765.

kunickiaj commented 4 years ago

Great, I’ll check that out. Thank you!

pgpx commented 4 years ago

@apyrgio sorry for the very late reply, and thanks for the detailed analysis. Yes, you were correct in your understanding of my idea.

I've just implemented a PoC of this solution (see the commits referenced above), and I think that only minimal changes were needed (just allowing the KustTarget of the KustomizationPatch to take ownership of the parent's ResourceAccumulator). I've written a few simple unit tests, but won't be able to test it in anger on my project until next week (I'm on vacation!). I also haven't tested the CLI tools with it.

Does anyone think that it is worth expanding into a full PR? I think that PatchTransformers are complementary to this solution - both have their uses.

apyrgio commented 4 years ago

Sure, I'd find it very interesting and would be happy to test it. For instance, there is a trivial diamond example in ioandr/kustomize-example that is already implemented in two ways:

When you have something in a working state, I could test it on this example. Might even give it a spin on our internal kustomization files, which are more complex.

pgpx commented 4 years ago

@apyrgio thanks - I've added some examples that use my KustomizationPatch prototype in my fork of the kustomize-example repo (feature-kustomizationpatch branch) - the kustomize-patch/overlays/aggregate replicates the diamond-style example and also shows how configmaps can be overridden by patches (one of my main uses).

I've also added kustomize-patch-complex that replicates the kind of environments that we currently use: CI, demo, and prod, some of which have a canary variant, some that need a stub server, and some that need ingresses. Without KustomizationPatch I have to duplicate at least some of the configuration.

I've moved the prototype code to https://github.com/pgpx/kustomize/tree/feature-kustomizationpatch (feature-kustomizationpatch branch) (and removed the reference to this issue to avoid spamming this thread with commit notifications!)

apyrgio commented 4 years ago

@pgpx I built kustomize from your repo and tested it on the examples that you provided. I saw the expected results, and verified that the other examples work as well, so that's great. I also took a look at the code and, while I'm not familiar with kustomize's internals, it doesn't seem very invasive.

I only have two minor issues:

  1. The "kustomization patch" phrase usually refers to the JSON/strategic merge patches, so the KustomizationPatch kind may create a bit of confusion. I'd probably go for KustomizationComponent, but I'm a bit biased, since I'm thinking in terms of composition.
  2. When I ran some of the examples, I saw Ingress resources referring to Services that didn't exist, and I instinctively thought that something was broken. This wasn't the case, but it would help if the examples were closer to the real world.

Nits aside, it's nice that with this small change we gain so much in terms of reusability, without the boilerplate that transformers/generators require. If you can send this as a PR and it gets merged, I think that this issue could effectively close, since it would make the following work:


                           +--------------+
                           |              |
              +----------->+  Component1  |
              |            |              |
              |            +--------------+
              |
       +------+-------+       +--------+
       |              |       |        |
       |  Deployment  +------>+  Base  |
       |              |       |        |
       +------+-------+       +--------+
              |
              |            +--------------+
              |            |              |
              +----------->+  Component2  |
                           |              |
                           +--------------+

@monopole: The proposed patch is quite small (~50 lines), but it introduces a new concept. Could you please comment on whether you like this approach?

pgpx commented 4 years ago

@apyrgio Thanks for taking a look. I agree that KustomizationPatch isn't the best name, but maybe the scope of what it allows is a bit more than just components? (e.g. it could reuse different independent changes to ConfigMaps to define endpoint URLs.) Maybe Mixin (though that's not strictly what is done here either). Oh, and sorry about the misleading test - I think a more comprehensive set of tests would be worthwhile if the idea is progressed in any case, and I'd be happy to submit it as a PR if that's OK.

apyrgio commented 4 years ago

@pgpx: I think that the name of the kustomization kind and the tests are minor, and can be fixed during the review. I'd suggest you send your branch as a PR, so that we can discuss design issues there.

pgpx commented 4 years ago

OK, will do - good idea.

steinybot commented 4 years ago

Would it also be possible to solve this by changing the way that bases are loaded?

My naive understanding was that bases worked the same way as an "include" such that:

overlays/dev/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: foo-dev
bases:
  - ../../base
  - ../common/node-port
commonLabels:
  variant: dev

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - api-service.yaml

overlays/common/node-port/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - foo-service.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml

Was equivalent to:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: foo-dev
resources:
  - api-service.yaml
  - foo-service.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml
commonLabels:
  variant: dev

This doesn't work right now and fails with:

Error: accumulating resources: recursed accumulation of path '/Users/jason/source/bug-reports/overlays/common/node-port': no matches for OriginalId ~G_v1_Service|~X|api; no matches for CurrentId ~G_v1_Service|~X|api; failed to find unique target for patch ~G_v1_Service|api

It seems to me that if bases were loaded in order and could reference resources loaded before them, then that would be enough to support composition.

I have a more complete example of what I was trying to do here https://github.com/steinybot/bug-reports/tree/kustomize/patch-service.

apyrgio commented 4 years ago

Right, instructing kustomize to load the resources in order is the gist of @pgpx's PR. For example, in your case, you should be able to convert the overlays/common/node-port overlay to kind: KustomizationPatch, and it should work.
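
For example, a rough sketch of that conversion, reusing the node-port overlay from the previous comment (KustomizationPatch is the kind proposed in @pgpx's prototype and is not part of released kustomize):

overlays/common/node-port/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: KustomizationPatch
resources:
  - foo-service.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml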

steinybot commented 4 years ago

@apyrgio Is there a reason not to make this new load order for KustomizationPatch the default for Kustomization?

It feels a bit off to me that the kind dictates the load order within the thing that is loading it. This will limit the ways in which things can be composed in that the component has to be aware of the fact that it may be composed. The concern of loading is backwards.

Will I be able to change everything to KustomizationPatch? In other words can I use KustomizationPatch as the kind of the top level kustomization.yaml file? If not then this means that it can't be composed into a larger set of resources.

apyrgio commented 4 years ago

Ah, I see what you mean. Ok, I'll try to take your points one by one and comment on them with my understanding and the authoritative responses we have for now:


Is there a reason not to make this new load order for KustomizationPatch the default for Kustomization?

So, you ask why the resources in this scenario:

resources:
  - ../base
  - overlay # Patches base but does not point to it directly

are handled in parallel (so the overlay cannot patch a resource in the base) and not in order (which would work, and is what KustomizationPatch ends up doing).

It seems that Kustomize has this supported use case, whereby a single base ends up being copied multiple times, with a slightly different namePrefix, nameSuffix or namespace. In order to achieve this, it must handle the resources in parallel by default.

The intention is to deploy multiple variants of the same application with a single command. Personally, I have reservations on whether this is a good pattern, or something that is widely used. In any case, this is further reinforced by @Liujingfang1's comment:

Kustomize expects different overlays of one common base are for different apps. Then in the aggregated overlay, you have multiple apps. Adding nameprefix, namesuffix or namespace in overlays are to make them for different apps.

So, in a nutshell, Kustomize must keep the default load order to maintain backwards compatibility.


It feels a bit off to me that the kind dictates the load order within the thing that is loading it. This will limit the ways in which things can be composed in that the component has to be aware of the fact that it may be composed. The concern of loading is backwards.

This is a valid concern. In fact, @monopole had proposed roughly what you're saying, that the aggregated overlay should specify how the resources should be merged (see #1292 (comment)). I liked this approach but, unfortunately, it never took off. Still, this is something that we could revisit.

Still, if you think about it, the problematic overlay in the example I mentioned above:

  resources:
    - ../base
->  - overlay # Patches base but does not point to it directly

attempts to patch a resource it has never defined. So, it does know that it's a component of sorts and expects that it will be composed. We could even generalize this by saying that any overlay that transforms something it has not defined itself, is aware that it's a component and must be composed.

Now, there are components that don't transform resources but create/generate them instead. In this case, you're right, they can be used both as components and as base overlays, so specifying a single kind is limiting. I don't have an answer to that I'm afraid. My gut feeling though is that this is a situation that will rarely, if ever, be encountered in actual cases.


Will I be able to change everything to KustomizationPatch? In other words can I use KustomizationPatch as the kind of the top level kustomization.yaml file? If not then this means that it can't be composed into a larger set of resources.

That's a very interesting observation and I think it's something @pgpx should have in mind before their PR is merged. If KustomizationPatch simply dictates that an overlay should be loaded in order, not in parallel, then yes, you should be able to use it anywhere on the stack, even on bases and top-level overlays. If this is not the case, then I'd consider this as an implementation limitation that should be fixable.


Sorry for the long comment but, as you see, there's a bit of a backstory to the points you've raised.

Looking back, this is an issue that is over 9 months old, has a steady influx of comments and upvotes, yet no engagement from the core team for the past four months. That's a pity because, while it's understandable that there are other top-level priorities, this issue is the second most upvoted and the second most commented ever in this repo.

It would help the Kustomize users that stumble onto these limitations, and then on this issue, if there was either an active discussion on how to improve the situation, or a mention of a roadmap item, or a design document, or something actionable in general. Else, the feeling they get is that "this seems like a non-issue for the Kustomize team, I guess I'll have to copy-paste stuff or use another tool".

/cc @monopole @Liujingfang1 @pwittrock

pgpx commented 4 years ago

@apyrgio Good answer - thanks! Regarding using KustomizationPatch at the top-level - it does then just work as a normal Kustomization, and there is even a test case for it (TestApplyingKustomizationPatchDirectlySameAsKustomization).

The basic idea is that a KustomizationPatch is aware/can use any resources defined before it in its parent's list of resources (if any). This would be the same as defining those resources directly in the KustomizationPatch and removing them from its parent, though those resources would then be fixed.

And I agree that any kind of feedback from the Kustomize team would be helpful, even if just to say that they are very busy / have real-world issues.

pwittrock commented 4 years ago

@apyrgio catching up on this issue, will try to respond within a week.

apyrgio commented 4 years ago

@pwittrock that's exciting, thanks for taking a look at this issue :-). BTW, I took the liberty of putting a discussion topic for this feature on sig-cli's agenda, so that we can answer questions more easily. I'm not sure if sig-cli is the proper place for community meetings regarding Kustomize so, if there's another one, please point me to it and I'll move it there.

pwittrock commented 4 years ago

@apyrgio Thanks. That's the right thing to do.

pgpx commented 4 years ago

@apyrgio and @pwittrock thanks for covering this in the sig-cli call - I'm sorry that I missed that. If you are going to create a new documentation-centric PR, would you be able to link to it here? I'd like to help if needed. Also, if the Kustomize internals are currently being refactored, then it might be worth looking at the pull request implementation, because the required changes are minimal (though that would also mean that they could be made later too).

apyrgio commented 4 years ago

@pgpx: Great that you are in sync as well. For the others: I haven't gotten around to this yet, but I plan to summarize what was discussed in the call. As for the doc-centric PR, yes, once I have something ready, I'll send it here as well.

apyrgio commented 4 years ago

As I mentioned before, there was a sig-cli meeting where this issue was discussed. You can find the recording here (relevant bits after the 29:00 mark).

The tl;dw is this:

So, given the above, the actionable items are these: