kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations

Failing to merge two CommonLabels entities #5733

Open · gpu-pug opened 4 months ago

gpu-pug commented 4 months ago

What happened?

I have this entity in one of my 'base' configurations:

commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"

and in the 'overlay' part I have this code:

commonLabels:
- path: spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent

When testing this code, I get this error:

E0712 14:29:35.025557  434445 run.go:74] "command failed" err="accumulating resources: accumulation err='accumulating resources from './fleet-server': '/home/username/Projects/apm-anthos-configsync/src/overlays/sandbox-v2/fleet-server' must resolve to a file': recursed merging from path '/home/username/Projects/apm-anthos-configsync/src/overlays/sandbox-v2/fleet-server': failed to merge CommonLabels fieldSpec: conflicting fieldspecs"

What did you expect to happen?

It should work flawlessly, but I've found a workaround: some of the paths need a leading '/'. So code in the overlay that looks like this:

commonLabels:
- path: /spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent

doesn't produce an error at all.

How can we reproduce it (as minimally and precisely as possible)?

I suppose it's enough to declare an entity like this in the base:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent

CommonLabels in base:

commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"

and CommonLabels in overlay:

commonLabels:
- path: spec/deployment/podTemplate/metadata/labels
  create: true
  kind: Agent

I can try to provide more detailed data if it's needed.

Expected output

No response

Actual output

No response

Kustomize version

v5.0.1

Operating system

Linux

ephesused commented 4 months ago

It looks like the issue is that the base and overlay disagree on whether an absent field path should be created. kustomize cannot merge the two configurations cleanly since it cannot honor both "create" (as configured in the overlay) and "don't create" (as configured in the base).

The leading slash on one of the fieldSpecs works around the check, and it looks like that gives you the behavior you're expecting. Other users might expect the opposite behavior, though. The failure looks to be kustomize's way of forcing the user to make an explicit decision on whether field creation should be honored.
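
For illustration, a minimal sketch of one way to resolve the conflict, assuming field creation is actually what you want: update the base fieldSpec so it agrees with the overlay by adding create: true:

commonLabels:
- group: agent.k8s.elastic.co
  kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
  create: true

With both fieldSpecs making the same choice about creation, there is no disagreement left for the merge to reject.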

stormqueen1990 commented 3 months ago

Hi there, @gpu-pug!

I tried reproducing the issue as described, but it seems something is missing. In the input you provided, the commonLabels field is an array, but the commonLabels field of the Kustomization type is a map[string]string. Could you please provide some extra detail about how your kustomization.yaml file is structured, or perhaps a minimal example that reproduces the issue?

Thanks in advance!

/triage needs-information
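
For reference, a minimal sketch of how the two forms relate (the label values here are placeholders): commonLabels in kustomization.yaml itself takes the map form, while the array form shown in the issue is a fieldSpec list that belongs in a separate transformer config referenced via configurations:

kustomization.yaml (sketch):

commonLabels:
  some-key: some-value

configurations:
- kustomizeconfig.yaml

kustomizeconfig.yaml (sketch):

commonLabels:
- kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
  create: true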

ephesused commented 3 months ago

@stormqueen1990, here's what I used for investigation.

$ kustomize-v5.0.1 build overlay
Error: merging config <skipping a lot of content here> failed to merge CommonLabels fieldSpec: conflicting fieldspecs

$ 

Note that v5.4.3 has the same behavior. As I implied earlier, if the base kustomizeconfig.yaml is updated to include create: true, then things work cleanly.

base/kustomization.yaml

resources:
- resources.yaml

configurations:
- kustomizeconfig.yaml

commonLabels:
  base-key: base-value

base/kustomizeconfig.yaml

commonLabels:
- kind: Agent
  group: agent.k8s.elastic.co
  path: "spec/deployment/podTemplate/metadata/labels"

base/resources.yaml

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
spec:
  version: 8.13.2
  kibanaRef:
    name: kibana
  elasticsearchRefs:
  - name: elasticsearch
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server
  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: fleet-server
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0

overlay/kustomization.yaml

resources:
- ../base

configurations:
- kustomizeconfig.yaml

commonLabels:
  overlay-key: overlay-value

overlay/kustomizeconfig.yaml

commonLabels:
- kind: Agent
  path: "spec/deployment/podTemplate/metadata/labels"
  create: true
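
For completeness, here's roughly what the Agent resource should look like in the kustomize build overlay output once the base kustomizeconfig.yaml also sets create: true. This is a sketch rather than captured output, and it assumes the custom fieldSpecs merge with the default metadata/labels fieldSpec:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  labels:
    base-key: base-value
    overlay-key: overlay-value
spec:
  # ...other spec fields unchanged...
  deployment:
    replicas: 1
    podTemplate:
      metadata:
        labels:
          base-key: base-value
          overlay-key: overlay-value
      spec:
        serviceAccountName: fleet-server
        # ...remaining podTemplate spec fields unchanged...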

stormqueen1990 commented 3 months ago

Hi @ephesused, thanks for the info! I agree that this behaviour is still reproducible, and I agree with your earlier explanation of why it's happening.

This doesn't seem to be a bug, but it would be helpful to get a more descriptive message for the error and perhaps not print the entire configuration.

/remove-kind bug
/kind cleanup
/kind documentation
/triage accepted