Closed donbowman closed 2 years ago
namespace
overwrites .metadata.namespace
in all resources; it provides easy separation of resources between different environments. For some types of resources, a namespace is not needed, or users don't want to overwrite the namespace. Thus we need to be able to skip certain types of resources.
Currently, Kustomize skips adding a namespace for some hard-coded types: https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/gvk/gvk.go#L154. This needs to be extended to allow users to specify a skip type.
Here it would not be a skip type; it would be a named entity.
e.g. I might have a setup that installs 1 pod into kube-system and the rest into its own namespace.
I have the same problem and I would like to use bases to link to dependencies without changing the dependency namespace. @donbowman were you able to get around this issue?
I've just hit the same problem when using a role binding, where the subject is in a different namespace than metadata.name. The subject namespace is being overwritten.
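To make the RoleBinding case concrete, here is a sketch (all names are hypothetical) of a binding whose subject lives in a different namespace than the binding itself; the kustomization-level namespace: rewrites the subject's namespace as well:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team   # rewritten by `namespace:` -- expected
subjects:
- kind: ServiceAccount
  name: ci-runner
  namespace: ci         # also rewritten -- not expected
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```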
Another example: when using Istio, I need to be able to have a certificate and ingressgateway in namespace istio-system, but have the rest of the material (including the config generator) in its own namespace.
so e.g. I have:
If I do not set a namespace in kustomization.yaml, I end up with the Secret from the secretGenerator in the root namespace, but the deployment that would reference it is not, so it doesn't work. If I do set a namespace in kustomization.yaml, then my ingressgateway and certificate are rewritten to be in the wrong namespace.
It seems there is no way to make this work.
Hello, I've been facing issues with the namespace override also.
Wouldn't it be possible to add an option to the namespace key in kustomization.yaml to specify whether we want it to override or not?
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace:
  overwrite: <true | false>
  name: <namespace>
Wouldn't it be possible to add an option to the namespace key in kustomization.yaml to specify if we want it to override or not? As long as it defaults to false to preserve current behaviour.
For interest, I created a transformer to do this for me:
#!/usr/bin/env python3
import sys
import yaml

# Read the transformer config (the file kustomize passes as argv[1]).
with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)
        sys.exit(1)

# Cluster-scoped kinds that must not get a namespace.
# See kubectl api-resources --namespaced=false
blacklist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    for yaml_input in yaml.safe_load_all(sys.stdin):
        if yaml_input["kind"] not in blacklist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % exc, file=sys.stderr)
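The core rule the script applies (skip cluster-scoped kinds, and only fill in a namespace when none is set) can be distilled into a few lines of plain Python, independent of PyYAML. The kind list here is deliberately abbreviated:

```python
# Abbreviated list of cluster-scoped kinds that must never get a namespace.
CLUSTER_SCOPED = {"Namespace", "ClusterRole", "ClusterRoleBinding",
                  "CustomResourceDefinition", "StorageClass"}

def apply_namespace(doc, ns):
    """Set metadata.namespace on a parsed manifest dict, but only if the
    kind is namespaced and no namespace is already present (weak binding)."""
    if doc.get("kind") in CLUSTER_SCOPED:
        return doc
    doc.setdefault("metadata", {}).setdefault("namespace", ns)
    return doc
```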
Can you please describe how you put the transformer into the kustomize build ... run!
~/.config/kustomize/plugin/agilicus/v1/namespacetransformer/
---
apiVersion: agilicus/v1
kind: NamespaceTransformer
metadata:
  name: not-used-ns
namespace: foobar
I need to get around to creating a GitHub repo with my generator + transformers.
edit: they are here
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
No solution for this?
I think it would be reasonable if there were a special annotation that the user could specify to opt an object out of the namespace transformer. This way is also flexible, as it can be paired with patches to mass opt-out resources. @monopole @Shell32-Natsu @donbowman what do you think?
@yanniszark sure, that's one solution. Or my original suggestion about replacing unset ones only, e.g. a weak binding.
@donbowman the reason I suggested another way is because replacing unset ones breaks the existing contract and is prone to mistakes (e.g., someone forgot one resource). The way I describe keeps the contract (assuming no one uses the annotation right now, which is a reasonable assumption IMO) and makes the exclusion explicit.
We can introduce the target field used in the patch transformer to more builtin plugins. @monopole
I have the same issue. When I use kustomize to generate manifests for cert-manager, kustomize overwrites the namespace of cert-manager's RBAC resources from kube-system to the namespace I set, which causes a lot of problems.
Just as a heads up for anyone who comes upon this using cert-manager: kube-system is NOT REQUIRED for cert-manager to work. You can just remove the kube-system definitions and use $(POD_NAMESPACE) in place of kube-system in the command line arguments.
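For reference, the $(POD_NAMESPACE) substitution relies on the Kubernetes downward API plus $(VAR) expansion in container args; a minimal sketch (the flag name is illustrative, check your cert-manager version for the actual one):

```yaml
containers:
- name: cert-manager
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  args:
  - --leader-election-namespace=$(POD_NAMESPACE)
```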
Is there a recommended work-around for this in the mean-time?
@lpil as far as I can tell the recommended workaround is to have a subdirectory with its own kustomization.yaml whose namespace points to what you want it to point to, then use that as a base for your other kustomization.
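A sketch of that layout, with hypothetical file and directory names: the inner kustomization pins its own namespace, and the outer one consumes it as a base without setting namespace: itself:

```yaml
# cert-things/kustomization.yaml -- pins its resources to istio-system
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: istio-system
resources:
- certificate.yaml

# kustomization.yaml -- no namespace:, so cert-things is left alone
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cert-things
- deployment.yaml
```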
I've been putting the correct namespace on the resources, and not setting a namespace in the kustomization.yaml files.
I discovered you can set the namespace for a generator, which solves the problem of generated secrets/configmaps not being in the correct namespace:
configMapGenerator:
- name: my-config
  namespace: mynamespace
  literals:
This comes with the drawback of not being able to deploy the kustomization bundle to any arbitrary namespace, but I think any kustomization bundle that needs to touch multiple namespaces is probably a singleton anyway, so it shouldn't be a big deal.
An additional thing that would be useful would be the ability to set the namespace for individual imported resources, rather than all or nothing.
resources:
- path: foo_app
  namespace: foo
- path: bar_app
  namespace: bar
@apeschel The issue with setting namespace(s) on the resources is that you make it static that way. Imagine you have a general-purpose application like a cache or database, which you need multiple times in the cluster. The best way is to simply create overlays that differ not just by specific configuration, but also by namespace. So every instance of your application lives in its own namespace.
@johny-mnemonic That's true, but I can't think of many scenarios where you would be using multiple namespaces in a kustomization and be planning on having multiple copies of that kustomization deployed. Using multiple namespaces in a kustomization seems to imply that the kustomization should be deployed as a singleton to the cluster.
What about cases like Helm release for kube-prometheus-stack, where services are deployed to kube-system namespace for monitoring, but you can have multiple copies of them, each with unique name prefix. That would be a legitimate use case, e.g. when you want to test updated configuration or new versions. Shouldn't such use case be supported by kustomize?
@apeschel well, for example, whenever you deploy an app together with the configuration for Prometheus Alertmanager, you need to deploy those rules to the namespace of Prometheus, while your app goes to its own namespace. So in case you want to customize those rules per overlay, while also wanting to define the namespace of the app in the overlay, you are in a bit of trouble...
I had the same issue today: my ingress must be in the istio-system namespace and uses istio-ingressgateway as backend. I ended up declaring the ingress multiple times in overlays and removing namespace: ... from kustomization files.
In my opinion the default behaviour should be to set the namespace only where it is not already set, or kustomize should at least provide a way to opt into that.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I just hit this issue with my ingress as well. Is there an ETA for a fix?
FWIW, I've had this problem and the best workaround I could find was to patch over the overridden namespace:
patchesJson6902:
- target:
    group: ""
    version: v1
    kind: ConfigMap
    name: my-config-name
  patch: |-
    - op: replace
      path: /metadata/namespace
      value: other-namespace
Not great if you've got a ton of resources but works in a pinch for a couple
I have the same issue: I have an Istio Gateway and VirtualService in one namespace, and I have to have a certificate in the istio-ingress namespace. Can't do it right now with Kustomize.
I found this issue while trying to figure out why Kustomize wasn't overriding the namespace for a MutatingWebhookConfiguration resource. I'm using https://github.com/influxdata/telegraf-operator/blob/v1.3.6/deploy/dev.yml, and the namespace for the mutating webhook configuration wasn't being updated by Kustomize. I ended up solving it like this, thanks to benjamin-wright above:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: observability
# For some reason the above `namespace: observability` doesn't update the namespace for the MutatingWebhookConfiguration
# resource, so we do that with a patch:
patchesJson6902:
- target:
    group: ""
    version: v1
    kind: MutatingWebhookConfiguration
    name: telegraf-operator
  patch: |-
    - op: replace
      path: /metadata/namespace
      value: observability
@stianlagstad unless I am mistaken, MutatingWebhookConfiguration is a cluster-scoped object, so adding a namespace there is incorrect, which is why kustomize ignores it.
@Moulick you're right, I've found an alternative solution: https://github.com/james-callahan/cert-manager-kustomize/tree/main/webhook#usage
As @benjamin-wright mentioned, the following works:
patchesJson6902:
- target:
    group: ""
    version: v1
    kind: ConfigMap
    name: my-config-name
  patch: |-
    - op: replace
      path: /metadata/namespace
      value: other-namespace
When adding this, I get the output:
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
Using patches, though, doesn't work:
patches:
- target:
    group: ""
    version: v1
    kind: ConfigMap
    name: my-config-name
  patch: |-
    - op: replace
      path: /metadata/namespace
      value: other-namespace
Has anyone found a workaround for this using patches?
@mathe-matician using the transformers field to do the patch seems to work:
transformers:
- |-
  apiVersion: builtin
  kind: PatchTransformer
  metadata:
    name: fix-cert-namespace
  patch: '[{"op": "replace", "path": "/metadata/namespace", "value": "istio-system"}]'
  target:
    group: cert-manager.io
    kind: Certificate
kustomize has the ability to replace all namespaces in one config by setting
namespace: XXX
Naively I thought this meant all unset namespaces, but it actually overwrites all of them.
This creates a challenge when you have something that uses more than one namespace (e.g. cert-manager, which installs one thing into kube-system and the rest into cert-manager).
I think maybe we want either:
unset only
oldNS: X -> newNS: Y
Although it is technically possible to JSON/strategic-merge patch all objects, this is exceptionally tedious when there are many of them.
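For what it's worth, later kustomize releases grew a builtin option along exactly these "unset only" lines. Assuming your version's builtin NamespaceTransformer supports the unsetOnly field, a weak binding can be sketched as:

```yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer
  metadata:
    name: weak-namespace
    namespace: my-namespace   # the namespace to apply
  unsetOnly: true             # only fill in resources with no namespace set
```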