yanniszark opened this issue 3 years ago
It's become clear over time that kustomize is primarily used in a git context. When that's the case, questions about what went into a build are moot, and a resources: field can be globbed or even eliminated as suggested in #3204.
Let's make sure kustomize edit add resources .... accepts a list.
Also update the docs site to describe how to use globs to achieve the goal here.
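To make that concrete, here is a minimal sketch of the proposed usage (directory names are illustrative, and whether the shell or kustomize itself expands the glob is one of the details left open above):

```bash
# Proposed: pass a list of files, letting the shell expand the globs;
# each match is recorded as an explicit entry in kustomization.yaml.
kustomize edit add resource deployments/*.yaml services/*.yaml

# The build itself still consumes only explicitly declared entries.
kustomize build .
```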
I have to say I am surprised by this issue. I just started learning kustomize yesterday and it seemed like it would fit perfectly into my workflow. I have a directory of all my YAML files in git, and I simply run kubectl apply -f myyaml/ on the directory to recursively apply all the YAML there. kustomize in kubectl seems like exactly a drop-in alternative, kubectl apply -k myyaml/, but with the benefit of consolidating and simplifying my YAML management.

It seemed so intuitive and obvious that

```yaml
bases:
- ../baseyaml
```

should apply all the YAML in that directory, just like kubectl apply -f, that I was very confused about why you would need to explicitly specify any .yml files as resources in the base directory's kustomization.yml, since this would be redundant. This is in all the examples, but the documentation does not explain why. I thought you needed to specify a resource in the base layer only if you wanted to allow a higher layer to overwrite those values, and that otherwise it would be "read only" (but obviously pulled in by default to the upper layers), because I could not think of any other way this would make sense. It was so counterintuitive that it took a while to figure out how misled I was.

So, now I download yet another tool (kustomize), dig up an ancient version to avoid https://github.com/kubernetes-sigs/kustomize/issues/1342 so it doesn't break kubectl apply -k, run another command (kustomize edit add resource *.yml), clean up after it because *.yml matches kustomization.yml and it didn't know not to add itself as a resource, and then I end up with uncommitted changes to kustomization.yml in my version control as a result of this extraneous process (so it does not really seem that conducive to a git-based workflow, IMHO), when all I wanted was to do what is (from my point of view) obviously implied by specifying a directory as a base in the first place: just use the YAML in that directory. In hindsight I should have just scripted up something dumb like this: https://github.com/kubernetes-sigs/kustomize/issues/119#issuecomment-526234246
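For reference, the "dumb script" approach amounts to regenerating the resources list from the directory contents; a minimal sketch (not the linked script), assuming a flat directory of .yml files:

```bash
#!/bin/sh
# Rebuild kustomization.yml from whatever YAML sits in the directory,
# skipping the kustomization file itself so it doesn't list itself.
cd myyaml/ || exit 1
{
  echo "resources:"
  for f in *.yml; do
    [ "$f" = "kustomization.yml" ] && continue
    echo "- $f"
  done
} > kustomization.yml
```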
kustomize supports the best practice of storing one’s entire configuration in a version control system.
Globbing the local file system for files not explicitly declared in the kustomization file at kustomize build time would violate that goal.
From my point of view, I want a way to explicitly declare a directory to use as a base. And when I manage the contents of that directory in version control, it is not at all clear how loading all the YAML there would violate the best practice of storing one's entire configuration in a version control system (in fact, precisely the opposite: it seems to closely conform to that goal). It is my responsibility to make sure I have the right YAML contents in my version control, not kustomize's responsibility. kustomize should not have opinions about how I specify or manage my files; it should do what I say, like any tool. If I accidentally break my deployment I will blame myself, but if kustomize doesn't let me easily manage and apply YAML, maybe I should blame kustomize :) Well, I won't blame it; I made some incorrect assumptions, and my intuition and expectations may have been misguided, but I hope it may be productive to describe my initial (maybe naive) user experience and perspective running into this issue. Thank you!
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Being unable to use wildcards in the resources field of kustomization.yaml adds an extra burden when trying to implement GitOps principles.
Imagine you want your users to be able to simply add a manifest to the repository so that the manifest gets automatically deployed in the environment. That's great! And simple.
Now imagine this procedure has changed, and you must tell this same user to edit some file called kustomization.yaml (which they know nothing about) and to add a new entry pointing to the manifest they have just added. Or better yet, you tell them to download the kustomize CLI (which they still know nothing about) to do this step in a "simpler" manner. Or you integrate it into your pipeline by adding a hook that automatically makes a new commit updating just the kustomization.yaml with the new entries.
In my view it adds unnecessary complexity and introduces new points of failure to the whole GitOps flow.
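For illustration, the pipeline-hook variant described above boils down to something like this (a sketch; the paths and commit message are made up, and it assumes kustomize edit add resource expands glob patterns):

```bash
# CI step: re-sync kustomization.yaml with the manifests on disk, then
# commit only when the regeneration actually changed something.
kustomize edit add resource 'manifests/*.yaml'
git add kustomization.yaml
git diff --cached --quiet || git commit -m "ci: sync kustomization resources"
```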
I have a use case where I control everything using Flux and kustomize files, but deep down in the structure (cluster/namespaces/core/....) is a folder which is a git submodule without a kustomization.yaml file. I want that folder there for structural purposes; however, I can't include it without having to revisit the files in the parent's kustomization.yaml from time to time.
It would be better if I could glob-import them.
@KnVerey @pwittrock Don't you guys think this basic, obvious requirement should be prioritised?
What the hell is wrong with Kustomize maintainers. Lacking this basic feature is ridiculous, Open this issue up.
@arash-bizcover I understand the frustration but we are 3 people with jobs and competing priorities outside of kustomize - we are addressing issues as quickly as we can. You are welcome to submit a PR or bring this issue up at a SIG-CLI meeting if you want more traction on this.
The idea is similar to having an /etc/something/conf.d/ directory, a widely adopted design pattern where you can just drop in a file to include a modular piece of configuration.
Expressing interest in the feature - in a respectful and supportive way - may be helpful, but perhaps it could also help to have an indication of whether the maintainers would accept/approve this feature. If so, perhaps someone could be forthcoming with a PR.
Personally I am open to the idea, as there seems to be a lot of user interest, and I can see use cases where running kustomize edit add resource /etc/something/conf.d/* followed by kustomize build would be a bit of a hassle - especially the cases described by @xeor and @hvitoi above.
However, I would like some more input from @KnVerey and @monopole because such a feature is specifically defined as out of scope for kustomize (https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/) and many of the decisions we have made recently have been to avoid these eschewed features - and I'd like some more context behind the decision to firmly disallow globbing.
Notably listed in this issue: https://github.com/kubernetes-sigs/kustomize/issues/3204:
Implicit inclusion violates the benefits of an unambiguous manifest declaration (see this note on globbing). This is mitigated by always working in the context of a git repository (so you can always know how or if what you have deviates from the repository contents).
So another option may be to enable globbing only when working in a git repository.
@KnVerey @monopole Do these use cases seem compelling enough to make an exception, perhaps as a configurable option?
Yes, these use cases are compelling enough to make an exception, in my opinion :) This feature would be great because, from a CI/CD pipeline point of view, you want new resources added automatically; editing kustomization.yaml with every new file is more complicated than just pushing a new file, in my opinion.
After careful consideration of the input on these issues, I'm inclined to agree that limited glob support in the resources field could provide concrete benefits and carry reasonably minimal risk in the use cases we should be optimizing for, i.e. settings in which GitOps best practices are being followed.
That said, I do think that Kustomize should continue to favour being extremely explicit in general. If we implement globbing, we should tightly scope this feature and update the eschewed features document accordingly. For instance, I personally think #3204 should still be out of bounds as inadequately explicit to be a fit for Kustomize, and I would prefer a smaller, less flexible feature over a more powerful but less predictable one.
We have a brand new process for aligning on features that are large and/or particularly contentious before time is spent on implementation: a mini in-repo KEP. If someone following this issue still feels strongly about it and is interested in contributing an implementation, please create an in-repo KEP with a more detailed proposal around what globbing will look like, how it will work, and what the boundaries of the functionality will be.
For example, please include: should globbing be supported in fields other than the resources: field? What about transformers:, for example? Per my strong preference for a tightly scoped feature, my default stance on most of the above is "should not be supported", but I'm open to having my mind changed by concrete use cases in the KEP.
```yaml
resources:
- api/spring-boot-api-*-custom-resource.yaml
- api/go-gin-gonic-api-*-custom-resource.yaml
```

which would result in all .yaml files matching these naming patterns being picked up without changing kustomization.yaml. The same would apply with deeper nesting:

```yaml
resources:
- api/*/*/spring-boot-api-*-custom-resource.yaml
- api/*/*/go-gin-gonic-api-*-custom-resource.yaml
```
/remove-kind design
/kind feature
kind/design is migrated to kind/feature, see https://github.com/kubernetes/community/issues/6144 for more details
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Please accept wildcards in resources. If we have 200 resources in a folder that we want to load, we have to specify each one in the kustomization.yaml, which duplicates file names all over.
One reason this is useful is to avoid git conflicts on kustomization.yaml. Conflict handling is usually a manual process, so CI/CD pipelines that modify the kustomization.yaml file will not be able to do it. kustomize edit add resources ... will not handle this use case.
I came here looking for some globbing support. I understand that globbing can have its own pitfalls, but I think it's a bit counterintuitive to not even support resources as a folder. I mean, if bases supports specifying a folder like ../../base, then why can't we simply have resources: ['somefolder'] so we don't have to explicitly call out every single YAML file? If there are other unsupported files, then ignore them and leave it up to the user to clean that up. It seems like a logical request, no?
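For context, a folder entry is accepted today, but only when the folder carries its own kustomization.yaml; a quick sketch of the difference (folder name illustrative):

```bash
# Works today only if somefolder/ contains its own kustomization.yaml
# that in turn lists the manifests explicitly.
cat > kustomization.yaml <<'EOF'
resources:
- somefolder
EOF

# Errors if somefolder/ has no kustomization.yaml inside it -- which is
# exactly the case this comment asks to support.
kustomize build .
```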
While I'd love a native solution to this, instead I created a little Python wrapper around kustomization.yaml that will walk sub-directories and populate every desired file into a kustomization.yaml, as well as ignore any number of files (if you wish). It has totally 'solved' this problem for me for over a year, so I don't have to manually manage and diff a ton of file paths. Ain't nobody got time for that.
https://github.com/DaemonDude23/helmizer
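A shell analogue of that wrapper's core idea might look like this (a sketch, not helmizer itself; the ignored path is illustrative):

```bash
# Walk sub-directories and (re)generate the resources list, skipping the
# kustomization file itself and anything under an ignored directory.
{
  echo 'apiVersion: kustomize.config.k8s.io/v1beta1'
  echo 'kind: Kustomization'
  echo 'resources:'
  find . -name '*.yaml' ! -name 'kustomization.yaml' ! -path './ignored/*' \
    | sort | sed 's|^\./|- |'
} > kustomization.yaml
```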
The people want globbing!
Better to say 'People still want globbing!'
Your best practices are not everyone's best practices.
please bring back globbing
> What the hell is wrong with Kustomize maintainers. Lacking this basic feature is ridiculous, Open this issue up.

> @arash-bizcover I understand the frustration but we are 3 people with jobs and competing priorities outside of kustomize - we are addressing issues as quickly as we can. You are welcome to submit a PR or bring this issue up at a SIG-CLI meeting if you want more traction on this.
I get that the tone wasn't very kind. But wasn't this already implemented once?
Why was this even removed?
Why does someone's opinion mean others have to put a considerable amount of effort into figuring out an alternative to putting a *?
Might as well remove globbing from bash...
For a lot of automation it is much simpler and faster to just drop a file in place instead of having to drop the file in place and update the kustomization.yaml, which has much greater potential to go wrong.
What is the current stance on this? Is there opposition to adding globbing, or is the problem that no one has time to add it?
Can we please reconsider adding globbing? I don't understand the concern when you can just not use it if you don't want to. A lot of Flux and Argo users would love to see this feature supported.
Globbing would be nice, but if you are using Flux you already have globbing: just use Flux's Kustomization (their CRD) instead of a kustomize kustomization. It has the same features and more.
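For anyone weighing that route, Flux's Kustomization selects a whole path rather than individual files; a sketch (names and interval are illustrative, and the API version may differ across Flux releases):

```bash
cat <<'EOF' | kubectl apply -f -
# Flux applies everything under spec.path, so dropping a new manifest
# into ./apps requires no list maintenance in any kustomization file.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  path: ./apps
  sourceRef:
    kind: GitRepository
    name: flux-system
EOF
```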
Hi 👋🏼 Adding my comment in support of this feature. My kustomize.yaml file is a Flux Kustomization that applies further resources using their kustomize controller. It would be nice, instead of:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./namespace.yaml
- ./wallace/kustomize.yaml
- ./gromit/kustomize.yaml
- ./were-rabbit/kustomize.yaml
```
to be able to do this instead:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./namespace.yaml
- ./*/kustomize.yaml
```
The removal was considered in https://github.com/kubernetes-sigs/kustomize/issues/217.
@monopole mentioned this in his comment there, but I do not understand how Java (a programming language) has any correlation to a tool used to build Kubernetes resources. Should globbing be removed from bash and other scripting/programming languages too?
It seems the decision to remove globbing support was made 5 years ago. @monopole and @Liujingfang1 haven't contributed to this project in over a year - are they still contributing at all? If not, maybe we can get some fresh eyes on adding this back.
I am just trying to offer some constructive criticism, as we have all voiced our reasons for adding this back but it falls on deaf ears with the maintainers.
Thanks!
Wouldn't this situation be typical for an optional feature flag? Default to off, but available for those who want to use it: --enable-globbing.
The Helm generator is also not active by default.
I have another simple use case for this feature: download/compile Grafana jsonnet dashboards and generate a ConfigMap for them. It would be so nice to allow kustomize to iterate through folders and pick up all *.json files. I think this could be worked around by creating the kustomization file by script to add the needed files, but then I might as well just generate the ConfigMap via the script 🤓
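The script workaround mentioned there can be quite small; a sketch (directory and ConfigMap name are illustrative):

```bash
# Regenerate the kustomization so its configMapGenerator tracks every
# dashboard JSON currently on disk, then build as usual.
{
  echo 'apiVersion: kustomize.config.k8s.io/v1beta1'
  echo 'kind: Kustomization'
  echo 'configMapGenerator:'
  echo '- name: grafana-dashboards'
  echo '  files:'
  for f in dashboards/*.json; do echo "  - $f"; done
} > kustomization.yaml
kustomize build .
```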
Maybe some of you are only looking for this command as a temporary workaround:

```
kustomize create --autodetect --recursive
```

Flags:

```
--autodetect   Search for kubernetes resources in the current directory to be added to the kustomization file.
--recursive    Enable recursive directory searching for resource auto-detection.
```
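Used as a refresh step, that might look like the following (a sketch; kustomize create refuses to overwrite an existing kustomization file, hence the rm - verify against your kustomize version):

```bash
# Re-run whenever manifests are added or removed.
rm -f kustomization.yaml
kustomize create --autodetect --recursive
kustomize build .
```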
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I second what @addisonautomates said. I'm using Argo and I just want to add/remove application resources from a particular folder without having to list them explicitly in the kustomize file. Adding/removing those additional YAMLs is conflict-free across multiple pipelines, while updating a single file potentially leads to conflicts, not to mention it's more work overall.
This stubbornness of yours is despicable. Like devoted fanatics: "we said NO and the mob can yell all they want" (old issue: https://github.com/kubernetes-sigs/kustomize/issues/119)
I'll throw my voice into this: it seems an oversight that, when everything in K8s is about scalability, we have to manually maintain a list. It feels wrong.
That said, I think the language of one or two posters here is appalling - pissing off and alienating the very devs we need help from is socially inept.
> Java, Go, or bazel require explicitly setting files

Sure, but these all fail at compile time if I miss a file, and they all have IDE support to auto-import so I don't have to worry about it. Kustomize fails both of these criteria, allowing a huge footgun where a missing file (say, an entry in a configmap) isn't caught until runtime (and maybe not even promptly).
There's a hard truth that needs to be faced here: customers are going to enable globbing through their own mechanisms if you don't provide it directly, and those mechanisms are going to lead to a more fragile system.
Heck, you already allow users to do it when creating something via the CLI. Knowing that people will build their own mechanisms leaves them to either write genrules and/or unit tests that ensure the files match the globbed set (see the sketch below). In either case users end up with a worse solution than if you simply enabled this.
Why handicap users in such a way? My philosophy is to provide guardrails for the happy path, but never disempower your users.
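The guard tests mentioned above can be as blunt as a CI script like this (a sketch; the manifests/ path is illustrative):

```bash
# Fail CI when a manifest on disk is not listed in kustomization.yaml --
# catching the missing-file footgun before runtime.
status=0
for f in manifests/*.yaml; do
  grep -qxF -- "- $f" kustomization.yaml || {
    echo "MISSING: $f is not listed in kustomization.yaml"
    status=1
  }
done
exit $status
```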
I'm interested in helping out with this issue and contributing to the project. However, I'm currently a bit unsure about the best direction to take. I've taken a look at the problem and its context, but I could use some guidance on how to approach it effectively.
If anyone could provide some insights or suggestions on how to get started, it would be greatly appreciated. I'm eager to learn and collaborate to find the best solution for this issue.
Looking forward to your feedback!
I've personally never seen a project actively block its users from an extremely popular feature that was once included and is still greatly desired.
It's likely a matter of time before this is picked up again; adding it behind a feature flag is an ingenious idea. Moreover, the two people associated with removing this feature, @monopole and @Liujingfang1, have pretty much disappeared from this project and the Kubernetes community in general, so in my opinion their opinions no longer carry much weight. It should go without saying, but their other contributions here should not go unnoticed.
Commenting here again, a couple of years after opening the issue, because I see many posters have missed the following:
- Project maintainers (@monopole, @KnVerey, @natasha41575) have been positive towards such a change in this very issue.
- The reason why it's not happening, from this comment, is maintainer overload. There seems to be no bandwidth to implement it.
If you, or more importantly, your company needs this, consider investing the effort to shape the tool you depend on. There's a pretty clear path to start (#3205 (comment)) and a direct way to interact with and have the ear of maintainers (the SIG-CLI meeting).
This issue has been raised repeatedly. I tried to follow the story across the various discussion threads.
Despite all the requests that have been made, the latest replies I can find from project maintainers still seem conservative, and I can hardly agree with the rationale behind them.
In particular, they recognize that kustomize's use case is with a version control system, where all the files in the globbing scope already exist statically. Paying extra effort to list out every single entry adds no value on top of this design.

> The reason why it's not happening, from https://github.com/kubernetes-sigs/kustomize/issues/3205#issuecomment-902850096, is because of maintainer overload. There seems to be no bandwidth to implement it.

I don't understand what's there to implement. The code was there and was removed, so isn't it just a matter of reverting https://github.com/kubernetes-sigs/kustomize/pull/219 and resolving conflicts if necessary? The only implementation work would be gating this behind a feature flag, if that was/is the condition for restoring globbing.
This is basically the same as https://github.com/kubernetes-sigs/kustomize/issues/119. Reopening, as that issue was closed. The reason for reopening is that we (Arrikto) believe the reasons given for eschewing this feature don't really apply.
It's also interesting that other use cases mandating glob-like behavior have come up from the Kustomize team itself (although I feel that particular one could benefit from more explicit globbing): https://github.com/kubernetes-sigs/kustomize/issues/3204
Is your feature request related to a problem? Please describe.
I want to automatically pick up new kustomizations inside a folder. For example, given a structure with a user-resources directory of per-user kustomizations, I want to have a kustomization file that automatically imports everything under user-resources.

Describe the solution you'd like
Kustomize should support globbing in resources. This feature is eschewed right now, and the given reason is:

> kustomize supports the best practice of storing one's entire configuration in a version control system. Globbing the local file system for files not explicitly declared in the kustomization file at kustomize build time would violate that goal.

I'd like to elaborate on why these arguments are not clear to us.
This is not clear to us. Storing the entire configuration in a VCS means that checking out a specific commit and running kustomize build should produce the same result every time. That's why kustomize has explicitly avoided CLI arguments and environment variables. But in the case of globs, the result of globbing the filesystem of a specific commit will always be the same. There is no hidden state here.

The problem mentioned in the referenced blog post is that when importing with globs, name collisions may occur. The example given in the blog post is that if one writes import java.util.* and import java.awt.*, and the List class is declared by two imports, Java will silently keep the one declared later in the code. However, this doesn't really apply to kustomize: if one explicitly imports java.util.List and java.awt.List, then it's immediately clear there is a collision, so catastrophe avoided. However, if one imports a/deployment.yaml and b/deployment.yaml, it's not really clear whether there is a resource collision. Thus, we believe this drawback doesn't really exist for a tool like kustomize.

Describe alternatives you've considered
Kustomize provides the alternative of CLI commands that support globbing and can refresh the kustomizations. However, this imposes an additional burden on the user: tracking every single place where this must occur.
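Concretely, that alternative is a refresh step repeated in every affected kustomization (a sketch; the layout is illustrative, and it assumes the edit command accepts the expanded directory entries):

```bash
# Re-run at the top level (and anywhere else that globs) whenever a
# user directory is added or removed -- the burden described above.
kustomize edit add resource user-resources/*
```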
I'd love to hear your thoughts on this, @monopole @Liujingfang1 @Shell32-Natsu. Also cc'ing the author of the original issue, @ahmetb.