vmware-archive / kubecfg

A tool for managing complex enterprise Kubernetes environments as code.
Apache License 2.0

Non-stable ordering #289

Open derrickburns opened 4 years ago

derrickburns commented 4 years ago

When one uses kubecfg to generate manifests (instead of applying them directly to Kubernetes), the order of the manifests is unstable. This makes it difficult to track changes.

This problem occurs when serializing objects (maps), not lists.

Please make the serialization order stable. Any order is fine, as long as it is stable.

seh commented 4 years ago

Can you share a few details about which objects are moving around? What are their GVKs, namespaces, and names?

kubecfg takes these details into consideration when sorting objects: it alphabetizes by namespace, name, and object kind. If for some reason your API server were changing the OpenAPI schema it serves (for example, whether a resource is namespaced or not), I could see this order jumping around.
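
For illustration, here is a minimal Go sketch of a comparator along those lines, written against k8s.io/apimachinery's unstructured type. It is not kubecfg's actual implementation, just the alphabetize-by-namespace-name-kind idea:

package kubecfgsketch

import (
	"sort"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// sortObjects orders manifests by namespace, then name, then kind,
// so repeated runs always emit them in the same sequence.
func sortObjects(objs []*unstructured.Unstructured) {
	sort.SliceStable(objs, func(i, j int) bool {
		a, b := objs[i], objs[j]
		if a.GetNamespace() != b.GetNamespace() {
			return a.GetNamespace() < b.GetNamespace()
		}
		if a.GetName() != b.GetName() {
			return a.GetName() < b.GetName()
		}
		return a.GetKind() < b.GetKind()
	})
}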

derrickburns commented 4 years ago

Here is my example:

{
  foo: [manifest1, manifest2],
  bar: [manifest3, manifest4],
  baz: [manifest5]
}

where

manifest3 has:

apiVersion: cert-manager.io/v1alpha2    
kind: Certificate

manifest4 has:

apiVersion: gateway.solo.io/v1
kind: VirtualService

All manifests are in the same namespace. All names are distinct.

Issue: manifest3 and manifest4 were swapped when re-running kubecfg.

derrickburns commented 4 years ago
[Screenshot: Screen Shot 2020-04-20 at 12.12.36 PM, a diff showing the reordered manifests]
seh commented 4 years ago

Thank you for the detailed example. What is the name of the VirtualService? Did it change between these two invocations?

derrickburns commented 4 years ago

The names didn't change (otherwise the change would have appeared in the diff).

seh commented 4 years ago

What is the name of the VirtualService?

derrickburns commented 4 years ago

Here is the entire file:

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  labels:
    app: pomerium
  name: pomerium
  namespace: pomerium
spec:
  chart:
    name: pomerium
    repository: https://helm.pomerium.io
    version: 5.0.3
  releaseName: pomerium
  values:
    annotations:
      configmap.reloader.stakater.com/reload: pomerium
      secret.reloader.stakater.com/reload: pomerium
    authenticate:
      idp:
        serviceAccount: true
    config:
      existingSecret: pomerium
      policy:
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://dev-portal.dev.tidepool.org
        to: http://dev-portal.gloo-system.svc.cluster.local:8080
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://apiserver.dev.tidepool.org
        to: http://apiserver-ui.gloo-system.svc.cluster.local:8080
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://envoy-admin.dev.tidepool.org
        to: http://gateway-proxy.gloo-system.svc.cluster.local:19000
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://glooe-monitoring.dev.tidepool.org
        to: http://glooe-grafana.gloo-system.svc.cluster.local
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://glooe-metrics.dev.tidepool.org
        to: http://glooe-prometheus-server.gloo-system.svc.cluster.local
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://goldilocks.dev.tidepool.org
        to: http://goldilocks-dashboard.goldilocks.svc.cluster.local
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://linkerd-web.dev.tidepool.org
        to: http://linkerd-web.linkerd.svc.cluster.local:8084
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://grafana.dev.tidepool.org
        to: http://monitoring-prometheus-operator-grafana.monitoring.svc.cluster.local
      - allow_websockets: true
        allowed_groups:
        - eng@tidepool.org
        allowed_users: []
        from: https://tracing.dev.tidepool.org
        to: http://jaeger-query.tracing.svc.cluster.local:16686
      rootDomain: dev.tidepool.org
    extraEnv:
      log_level: debug
    forwardAuth:
      enabled: false
    ingress:
      enabled: false
    service:
      type: ClusterIP
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  labels:
    protocol: http
    type: pomerium
  name: proxy-http
  namespace: pomerium
spec:
  displayName: proxy-http
  virtualHost:
    domains:
    - dev-portal.dev.tidepool.org
    - apiserver.dev.tidepool.org
    - envoy-admin.dev.tidepool.org
    - glooe-monitoring.dev.tidepool.org
    - glooe-metrics.dev.tidepool.org
    - goldilocks.dev.tidepool.org
    - linkerd-web.dev.tidepool.org
    - grafana.dev.tidepool.org
    - tracing.dev.tidepool.org
    routes:
    - matchers:
      - prefix: /
      redirectAction:
        httpsRedirect: true
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  labels:
    protocol: https
    type: pomerium
  name: proxy-https
  namespace: pomerium
spec:
  displayName: proxy-https
  sslConfig:
    secretRef:
      name: pomerium-tls
      namespace: pomerium
    sniDomains:
    - dev-portal.dev.tidepool.org
    - apiserver.dev.tidepool.org
    - envoy-admin.dev.tidepool.org
    - glooe-monitoring.dev.tidepool.org
    - glooe-metrics.dev.tidepool.org
    - goldilocks.dev.tidepool.org
    - linkerd-web.dev.tidepool.org
    - grafana.dev.tidepool.org
    - tracing.dev.tidepool.org
  virtualHost:
    domains:
    - dev-portal.dev.tidepool.org
    - apiserver.dev.tidepool.org
    - envoy-admin.dev.tidepool.org
    - glooe-monitoring.dev.tidepool.org
    - glooe-metrics.dev.tidepool.org
    - goldilocks.dev.tidepool.org
    - linkerd-web.dev.tidepool.org
    - grafana.dev.tidepool.org
    - tracing.dev.tidepool.org
    routes:
    - matchers:
      - prefix: /
      options:
        headerManipulation:
          requestHeadersToRemove:
          - Origin
        upgrades:
        - websocket:
            enabled: true
      routeAction:
        single:
          upstream:
            name: pomerium-proxy
            namespace: pomerium
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  labels:
    protocol: https
    type: pomerium
  name: authorize
  namespace: pomerium
spec:
  displayName: authorize
  sslConfig:
    secretRef:
      name: pomerium-tls
      namespace: pomerium
    sniDomains:
    - authorize.dev.tidepool.org
  virtualHost:
    domains:
    - authorize.dev.tidepool.org
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: pomerium-authorize
            namespace: pomerium
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  labels:
    protocol: https
    type: pomerium
  name: authenticate
  namespace: pomerium
spec:
  displayName: authenticate
  sslConfig:
    secretRef:
      name: pomerium-tls
      namespace: pomerium
    sniDomains:
    - authenticate.dev.tidepool.org
  virtualHost:
    domains:
    - authenticate.dev.tidepool.org
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: pomerium-authenticate
            namespace: pomerium
---
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  labels:
    app: pomerium-proxy
  name: pomerium-proxy
  namespace: pomerium
spec:
  discoveryMetadata: {}
  kube:
    selector:
      app.kubernetes.io/instance: pomerium
      app.kubernetes.io/name: pomerium-proxy
    serviceName: pomerium-proxy
    serviceNamespace: pomerium
    servicePort: 443
  sslConfig:
    secretRef:
      name: pomerium-proxy-tls
      namespace: pomerium
---
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  labels:
    app: pomerium-authenticate
  name: pomerium-authenticate
  namespace: pomerium
spec:
  discoveryMetadata: {}
  kube:
    selector:
      app.kubernetes.io/instance: pomerium
      app.kubernetes.io/name: pomerium-authenticate
    serviceName: pomerium-authenticate
    serviceNamespace: pomerium
    servicePort: 443
  sslConfig:
    secretRef:
      name: pomerium-authenticate-tls
      namespace: pomerium
---
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  labels:
    app: pomerium-authorize
  name: pomerium-authorize
  namespace: pomerium
spec:
  discoveryMetadata: {}
  kube:
    selector:
      app.kubernetes.io/instance: pomerium
      app.kubernetes.io/name: pomerium-authorize
    serviceName: pomerium-authorize
    serviceNamespace: pomerium
    servicePort: 443
  sslConfig:
    secretRef:
      name: pomerium-authorize-tls
      namespace: pomerium
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: authenticate.dev.tidepool.org
  namespace: pomerium
spec:
  commonName: authenticate.dev.tidepool.org
  dnsNames:
  - authenticate.dev.tidepool.org
  - authorize.dev.tidepool.org
  - dev-portal.dev.tidepool.org
  - apiserver.dev.tidepool.org
  - envoy-admin.dev.tidepool.org
  - glooe-monitoring.dev.tidepool.org
  - glooe-metrics.dev.tidepool.org
  - goldilocks.dev.tidepool.org
  - linkerd-web.dev.tidepool.org
  - grafana.dev.tidepool.org
  - tracing.dev.tidepool.org
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-production
  secretName: pomerium-tls
seh commented 4 years ago

Ah, now I see what's wrong: only the delete and update subcommands sort the objects. In your case, presumably running show, you're falling prey to the JSON reader's use of Go map iteration, whose order we know is deliberately unspecified and variable.
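
For anyone unfamiliar, here is a self-contained demonstration: the Go runtime deliberately randomizes map iteration order, so a program like this can print the keys in a different order on every run.

package main

import "fmt"

func main() {
	manifests := map[string][]string{
		"foo": {"manifest1", "manifest2"},
		"bar": {"manifest3", "manifest4"},
		"baz": {"manifest5"},
	}
	// Map iteration order is unspecified and randomized at runtime,
	// so these keys may print in a different order on each execution.
	for key := range manifests {
		fmt.Println(key)
	}
}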

Thank you for providing the additional detail along the way.

derrickburns commented 4 years ago

Yes, I am using the show subcommand. I follow a GitOps workflow and use kubecfg to generate the manifests.

mkmik commented 4 years ago

We should fix that and have "show" sort its output the same way "update" does.
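
To make that concrete, a hypothetical sketch (reusing the sortObjects comparator from earlier; none of this is kubecfg's actual code): sort the flattened objects first, then emit each one as its own YAML document.

package kubecfgsketch

import (
	"fmt"
	"io"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

// printStable is what a fixed "show" could look like: sort first with
// the sortObjects comparator sketched earlier, then emit each object
// as its own YAML document.
func printStable(w io.Writer, objs []*unstructured.Unstructured) error {
	sortObjects(objs)
	for _, o := range objs {
		data, err := yaml.Marshal(o.Object)
		if err != nil {
			return err
		}
		if _, err := fmt.Fprintf(w, "---\n%s", data); err != nil {
			return err
		}
	}
	return nil
}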