aws-controllers-k8s / community

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes.
https://aws-controllers-k8s.github.io/community/
Apache License 2.0

FluxCD - resources with references are always configured #1880

Open · gecube opened this issue 10 months ago

gecube commented 10 months ago

Describe the bug

If we create a resource that references another resource through a ref field, like:

apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: production-private-route-table-eu-west-2c
  namespace: infra-production
spec:
  vpcRef:
    from:
      name: production
  routes:
    - destinationCIDRBlock: 0.0.0.0/0
      natGatewayRef:
        from:
          name: natgateway-eu-west-2c
    - destinationCIDRBlock: 10.0.0.0/16
      vpcPeeringConnectionID: pcx-0a7197b4f5ced6f01

then the target resource looks like:

spec:
  routes:
    - destinationCIDRBlock: 10.0.0.0/16
      vpcPeeringConnectionID: pcx-0a7197b4f5ced6f01
    - destinationCIDRBlock: 0.0.0.0/0
      natGatewayID: nat-07a301987eaa97785
  tags:
    - key: services.k8s.aws/namespace
      value: infra-production
    - key: services.k8s.aws/controller-version
      value: ec2-1.0.3
  vpcRef:
    from:
      name: production

So we can clearly see that natGatewayRef was substituted with the resolved natGatewayID. This means that on every reconciliation by Flux the resource is changed twice: Flux applies the manifest containing the ref, and the ACK controller then rewrites the spec with the resolved ID. It is also curious that vpcRef is not substituted.
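A possible mitigation on the Flux side, assuming the kustomize-controller's per-object server-side-apply annotation behaves as documented: marking the object with kustomize.toolkit.fluxcd.io/ssa: Merge should let Flux preserve fields owned by another field manager (here, the ACK controller) instead of reverting them on every sync. A minimal sketch:

apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: production-private-route-table-eu-west-2c
  namespace: infra-production
  annotations:
    # Assumption: with this annotation, Flux merges its apply with
    # fields set by other managers (such as natGatewayID resolved by
    # the ACK controller) rather than undoing them on each sync.
    kustomize.toolkit.fluxcd.io/ssa: Merge
spec:
  vpcRef:
    from:
      name: production
  # routes as in the original manifest above

Whether this actually breaks the apply/rewrite loop depends on how the ACK controller claims field ownership under server-side apply, so treat it as something to test rather than a confirmed fix.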

Steps to reproduce

Just apply the first manifest

Expected outcome

No firm idea; this needs to be discussed. Possibly the substitutions could be recorded in the status field, or handled by an admission controller. The only thing I can propose is that the controller not change the original spec as authored; otherwise I will need to find a way to exclude these fields from reconciliation.
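To make the status-field proposal concrete, here is a purely hypothetical sketch (ACK has no resolvedReferences field today) of how the controller could record resolved IDs without mutating the spec:

spec:
  routes:
    - destinationCIDRBlock: 0.0.0.0/0
      natGatewayRef:            # the ref stays exactly as authored
        from:
          name: natgateway-eu-west-2c
status:
  # Hypothetical field: the controller writes resolved IDs here,
  # so GitOps tools see no drift against the authored manifest.
  resolvedReferences:
    natGatewayID: nat-07a301987eaa97785

With the spec left untouched, Flux would compare its manifest against an unchanged spec and stop detecting drift on every reconciliation.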

gecube commented 9 months ago

linked to #1898

a-hilaly commented 9 months ago

This is very likely a bug with ACK references; we'll have to investigate this further. Thank you for reporting this, @gecube!

ack-bot commented 1 month ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community.

/lifecycle stale

gecube commented 1 month ago

/remove-lifecycle stale