kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

Support 'kubectl patch' strategic merge to allow new host entries. #1379

Closed joneal closed 3 years ago

joneal commented 4 years ago

I am deploying an ALB Ingress controller with an initial Ingress for a service (foo-dev) in a 'dev' namespace, using the following kubectl command and Ingress file:

kubectl apply -f eks-preprod-ingress.yaml -n dev

where eks-preprod-ingress.yaml is

apiVersion: extensions/v1beta1
kind: Ingress 
metadata:
  name: eks-preprod-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
  labels:
    app: eks-preprod-ingress
spec:
  rules:
    - host: foo-dev.sgn.example.internal
      http:
        paths:
        - backend:
            serviceName: foo-dev
            servicePort: 80

The result

$ kubectl get ingress -n dev eks-preprod-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
....
spec:
  rules:
  - host: foo-dev.sgn.example.internal
    http:
      paths:
      - backend:
          serviceName: foo-dev
          servicePort: 80
....

Everything works great...a new ALB is created with a listener/rule to a newly created target group. I can create an A record in Route 53 and access my service.

Through our release process (Azure DevOps), I want to add new services and host entries to the existing ingress/ALB in the future. Deploying new services is no problem. However, adding a new host entry to the existing eks-preprod-ingress is problematic. I would like to use the kubectl patch command as follows:

kubectl patch ingress eks-preprod-ingress -p "$(cat game-patch-dev.yaml)" -n dev

where game-patch-dev.yaml is a patch with a new host entry

spec:
  rules:
    - host: game-dev.sgn.example.internal
      http:
        paths:
        - backend:
           serviceName: game-dev
           servicePort: 80

When I run the patch command, the new 'game-dev' host entry REPLACES the existing 'foo-dev' entry. Is the operation not doing a strategic merge?

The result:

$ kubectl get ingress eks-preprod-ingress -n dev -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
....
spec:
  rules:
  - host: game-dev.sgn.example.internal
    http:
      paths:
      - backend:
          serviceName: game-dev
          servicePort: 80
....

Is there a recommended way of adding new host entries to the ingress? I've tried using the patch with --type=json with 'jq' operations, but this led to unspeakable horrors.

Note that adding a new host also needs to be an idempotent operation: the CI/CD process will update the service while the host remains the same, so patch operations should not add host entries that duplicate existing ones.

kishorj commented 4 years ago

The Kubernetes Ingress rules field does not have a patchStrategy defined, so for strategic merge patch requests, the default patch strategy of replace takes effect.
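
For contrast, a list field that does define a merge key behaves differently. The containers list in a pod template is keyed by name, so a strategic merge patch against a Deployment only touches the named container. A minimal sketch (the deployment name and image below are placeholder values, not from this issue):

# Sketch: containers are merged by their "name" key, so only the matching
# container's image changes; other containers in the pod spec are untouched.
# "foo-dev" and the image tag are assumed example values.
$ kubectl patch deployment foo-dev -n dev -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"foo-dev","image":"example/foo-dev:2.0"}]}}}}'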

The JSON patch is one option; what issues did you encounter?

joneal commented 4 years ago

As new services are created/updated from our CI/CD system, a corresponding host entry needs to be created in the Ingress resource. Because the strategic merge is not supported, I tried to do this with jq.

Example:

$ kubectl -n dev patch ingress eks-preprod-ingress --type "json" -p '[{"op":"add","path":"/spec/rules/-","value":{"host":"game3-dev.sgn.samtec.internal","http":{"paths":[{"backend":{"serviceName":"game3-dev","servicePort":80}}]}}}]'

Here a new host is added, but subsequent updates to the service would add additional host entries of the same type, i.e. this is not an idempotent operation.

So I thought I could perhaps check the array of host entries and only add the entry if it does not already exist in the hosts entries array.

$ kubectl get ingress eks-preprod-ingress -n dev -o json | jq '[.spec.rules[].host] | index("game2-dev.sgn.samtec.internal")'

But I gave up at this point because the solution was getting clunky, and we can update the ingress manually since we only have a handful of services.
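
For completeness, the jq check and the JSON patch could be combined so the patch only runs when the host is missing. A rough sketch along those lines (the host/service names are placeholders following the examples above):

# Sketch: append the rule only if the host is not already present.
# HOST and SERVICE are example values, not taken from this issue verbatim.
HOST="game2-dev.sgn.samtec.internal"
SERVICE="game2-dev"

# jq -e exits non-zero when index() returns null, i.e. the host is missing.
if ! kubectl get ingress eks-preprod-ingress -n dev -o json \
    | jq -e --arg h "$HOST" '[.spec.rules[].host] | index($h)' > /dev/null; then
  kubectl patch ingress eks-preprod-ingress -n dev --type json -p \
    "[{\"op\":\"add\",\"path\":\"/spec/rules/-\",\"value\":{\"host\":\"$HOST\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"$SERVICE\",\"servicePort\":80}}]}}}]"
fi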

Not supporting a strategic merge puts a burden on the developer if not all the services are known at the time the ingress resource is created.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1379#issuecomment-771172899):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

kmorwath commented 1 year ago

Why was this issue closed without a solution, or even a "won't do" with an explanation? Supporting a strategic merge for Ingress rules would be a welcome addition when services and deployments are created dynamically and Ingresses need to be updated to access them without creating a new ingress each time, which could in turn require updating firewall rules, etc.