Closed mrbq closed 2 weeks ago
/triage needs-information
Could you please also add information on how you are trying to apply these resources (`kustomize build` vs `kubectl apply -k`, etc.)?
I am executing `kubectl apply -k`.
The integration of Kustomize into `kubectl` is maintained in the k/kubernetes repository. I wasn't able to reproduce the issue when using `kubectl` v1.30; could you please confirm what version of Kustomize is embedded in your `kubectl` by running `kubectl version`?
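For reference, recent `kubectl` releases report the embedded Kustomize version directly in the `kubectl version` output (illustrative transcript; the version strings here are placeholders and will differ on your machine):

```text
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
```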
If you cannot upgrade your `kubectl` version, an alternative is to use `kustomize build` and separately run `kubectl apply`. That will allow you to build the manifests with a different version of Kustomize and then apply the generated manifests as a separate step.
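Sketched out, the two-step flow looks like this (the overlay path is a placeholder):

```shell
# Build with a standalone Kustomize binary (any version you choose)...
kustomize build ./my-overlay > manifests.yaml
# ...then apply the rendered manifests with whatever kubectl you have:
kubectl apply -f manifests.yaml

# Or as a single pipeline, without the intermediate file:
kustomize build ./my-overlay | kubectl apply -f -
```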
/kind support
Mind that you need to have the `images` field set in `kustomization.yaml` for it to fail.
I've just installed Kustomize 5.4.2 and the issue can be reproduced with that version, with a single resource and with the image specified.
The reproduction steps in the opening comment are not complete. If I'm following correctly, this is the problem scenario:
problem/kustomization.yaml

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources.yaml
images:
  - name: busybox
    newName: alpine
    newTag: "3.6"
```
problem/resources.yaml

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
  namespace: my-namespace
spec:
  clusterName: infinispan
  name: mycache
  template: <infinispan><cache-container><distributed-cache name="mycache" mode="SYNC" owners="2"><expiration lifespan="5000" max-idle="3000" /></distributed-cache></cache-container></infinispan>
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: busybox:1.29.0
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```
```
$ kustomize build problem
Error: considering field 'spec/template/spec/containers[]/image' of object Cache.v2alpha1.infinispan.org/mycache.my-namespace: expected sequence or mapping node
```
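To illustrate why the build fails, here is a minimal sketch (hypothetical, not kustomize's real code) of the path traversal the image transformer performs: it walks the default fieldSpec `spec/template/spec/containers[]/image`, and on the `Cache` resource `spec.template` is a plain string rather than a mapping, so descent stops with the error above.

```python
# Minimal sketch, not kustomize's actual implementation: the image transformer
# walks the default fieldSpec 'spec/template/spec/containers[]/image'. On a
# Deployment-like object every step of that path is a mapping, but on the
# Cache resource spec.template is a plain string, so the descent fails.

def descend(node, path):
    """Follow `path` through nested mappings, failing like the transformer does."""
    for key in path:
        if not isinstance(node, dict):
            raise TypeError("expected sequence or mapping node")
        node = node[key]
    return node

deployment_like = {"spec": {"template": {"spec": {"containers": [{"image": "busybox:1.29.0"}]}}}}
cache = {"spec": {"template": "<infinispan>...</infinispan>"}}

descend(deployment_like, ["spec", "template", "spec"])  # succeeds: mappings all the way down
try:
    descend(cache, ["spec", "template", "spec"])
except TypeError as err:
    print(err)  # -> expected sequence or mapping node
```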
A solution is to separate the `Cache` from the resources that need image conversion. As far as I'm aware, kustomize always uses its image transformer defaults; I believe any custom configurations are appended to the defaults (as opposed to overriding them). So, move the problem resource away from the image transforms:
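The resulting directory layout (matching the files that follow):

```text
solution/
├── kustomization.yaml      # aggregates pods/ and caches/resources.yaml
├── caches/
│   └── resources.yaml      # Cache resource, outside the image transform
└── pods/
    ├── kustomization.yaml  # carries the images: transform
    └── resources.yaml      # Pod resource
```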
solution/caches/resources.yaml

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
  namespace: my-namespace
spec:
  clusterName: infinispan
  name: mycache
  template: <infinispan><cache-container><distributed-cache name="mycache" mode="SYNC" owners="2"><expiration lifespan="5000" max-idle="3000" /></distributed-cache></cache-container></infinispan>
```
solution/kustomization.yaml

```yaml
resources:
  - pods
  - caches/resources.yaml
```
solution/pods/kustomization.yaml

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources.yaml
images:
  - name: busybox
    newName: alpine
    newTag: "3.6"
```
solution/pods/resources.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: busybox:1.29.0
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```
```
$ kustomize build solution
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
  namespace: my-namespace
spec:
  clusterName: infinispan
  name: mycache
  template: <infinispan><cache-container><distributed-cache name="mycache" mode="SYNC"
    owners="2"><expiration lifespan="5000" max-idle="3000" /></distributed-cache></cache-container></infinispan>
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: myapp-pod
spec:
  containers:
  - command:
    - sh
    - -c
    - echo The app is running! && sleep 3600
    image: alpine:3.6
    name: myapp-container
```
I can confirm that splitting the setup into two kustomization.yaml files and aggregating them solves the problem.
@mrbq, I know that's not satisfying, but at least it lets you move forward. It would be nice if kustomize allowed one to ignore the default image paths somehow, but I'm not aware of a way to do that.
What happened?
When creating a custom resource, specifically a `Cache` resource from the Infinispan operator, with the structure shown in the reproduction above. This is a valid resource in Kubernetes; it can be applied with no modification.
When the kustomize overlay is applied, it fails.
If the `spec.template` field is not present, it works.
What did you expect to happen?
The resource to be created.
How can we reproduce it (as minimally and precisely as possible)?
Expected output
Actual output
Kustomize version
5.2.1
Operating system
Windows