Kong / kong

🦍 The Cloud-Native API Gateway and AI Gateway.
https://konghq.com/install/#kong-community
Apache License 2.0

failed fetching KongUpstreamPolicy after upgrade to Kong 3.6 #12661

Closed: skybalsamoan closed this issue 5 months ago

skybalsamoan commented 6 months ago

Is there an existing issue for this?

Kong version ($ kong version)

3.6

Current Behavior

After upgrading from Kong 3.5 to 3.6, I receive errors while validating KongUpstreamPolicies used as health checks:

```
2024-02-28T09:24:23Z error failed fetching KongUpstreamPolicy: KongUpstreamPolicy mynamespace/kong-health-check not found {"name": "myservice", "namespace": "mynamespace", "GVK": "/v1, Kind=Service", "error": "resource processing failed"}
```
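For reference, the pairing the error message implies looks roughly like the sketch below. This is a hypothetical reconstruction using only the names from the log (`kong-health-check`, `myservice`, `mynamespace`); the spec fields are illustrative, not our exact policy. KIC resolves the `konghq.com/upstream-policy` annotation to a KongUpstreamPolicy in the Service's own namespace, so a missing, renamed, or differently namespaced policy object would produce this kind of "not found" error:

```yaml
# Hypothetical reconstruction from the names in the error log above;
# spec values are illustrative, not the actual policy in use.
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: kong-health-check
  namespace: mynamespace
spec:
  healthchecks:
    active:
      type: http
      httpPath: /status        # illustrative probe path
      healthy:
        interval: 5
        successes: 2
      unhealthy:
        interval: 5
        httpFailures: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    # KIC resolves this annotation to a KongUpstreamPolicy
    # in the same namespace as the Service.
    konghq.com/upstream-policy: kong-health-check
spec:
  selector:
    app: myservice             # illustrative selector
  ports:
    - port: 80
```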

Expected Behavior

Since no breaking change seems to be documented, this should work as it did in the previous version.

Steps To Reproduce

  1. EKS cluster
  2. Kong installed via the Helm chart (version 2.36.0) in DB-less mode with these values:
    
```yaml
deployment:
  kong:
    enabled: true
  serviceAccount:
    create: true
    automountServiceAccountToken: false
  prefixDir:
    sizeLimit: 256Mi
  tmpDir:
    sizeLimit: 1Gi

admin:
  enabled: false

env:
  database: "off"
  router_flavor: "traditional"
  nginx_worker_processes: "auto"
  proxy_access_log: /dev/stdout
  proxy_stream_access_log: /dev/stdout
  admin_access_log: "off"
  admin_gui_access_log: "off"
  portal_api_access_log: "off"
  proxy_error_log: /dev/stderr
  admin_error_log: /dev/stderr
  admin_gui_error_log: /dev/stderr
  portal_api_error_log: /dev/stderr
  prefix: /kong_prefix/
  trusted_ips: 0.0.0.0/0,::0
  real_ip_recursive: "on"
  real_ip_header: X-Forwarded-For
  headers: "off"
  plugins: bundled

status:
  enabled: true
  http:
    enabled: true
    containerPort: 8100

proxy:
  enabled: true
  annotations:
    # AWS-LOAD-BALANCER-CONTROLLER RELATED
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-subnets: ${private_subnets}
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true, deletion_protection.enabled=false
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/status"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8100"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: ${kong_nlb_sg}
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${certificate_arn}
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS13-1-2-2021-06"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=60, deregistration_delay.connection_termination.enabled=true, stickiness.enabled=false, stickiness.type=source_ip, preserve_client_ip.enabled=true, proxy_protocol_v2.enabled=false
    # PROMETHEUS RELATED
    prometheus.io/port: "8100"
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  labels:
    enable-metrics: "true"
  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    servicePort: 80
    containerPort: 8000
  tls:
    enabled: true
    servicePort: 443
    containerPort: 8443
    overrideServiceTargetPort: 8000
    parameters: []

ingressController:
  enabled: true
  resources:
    limits:
      cpu: 100m
      memory: 256M
    requests:
      cpu: 100m
      memory: 256M

postgresql:
  enabled: false

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:

topologySpreadConstraints: []
# - maxSkew: 1
#   topologyKey: topology.kubernetes.io/zone
#   whenUnsatisfiable: DoNotSchedule
#   labelSelector:
#     matchLabels:
#       foo: bar

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: "25%"
    maxUnavailable: "0%"

# If you want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
resources:
  limits:
    cpu: 900m
    memory: 1792M
  requests:
    cpu: 900m
    memory: 1792M

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
# kuma.io/gateway:
# traffic.sidecar.istio.io/includeInboundPorts: ""

# Labels to be added to Kong pods
podLabels: {}

# Kong pod count.
# It has no effect when autoscaling.enabled is set to true.
replicaCount: 1

# Enable autoscaling using HorizontalPodAutoscaler
# When configuring an HPA, you must set resource requests on all containers via
# "resources" and, if using the controller, "ingressController.resources" in values.yaml
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  behavior: {}
  # targetCPUUtilizationPercentage is only used if the cluster doesn't support autoscaling/v2 or autoscaling/v2beta
  # targetCPUUtilizationPercentage:
  # Otherwise, for clusters that do support autoscaling/v2 or autoscaling/v2beta, use metrics
  metrics:

# Kong Pod Disruption Budget
podDisruptionBudget:
  enabled: false
  # Uncomment only one of the following when enabled is set to true
  # maxUnavailable: "50%"
  # minAvailable: "100%"

serviceMonitor:
  enabled: false
  namespace: metrics

enterprise:
  enabled: false

manager:
  enabled: false

portal:
  enabled: false

portalapi:
  enabled: false

clustertelemetry:
  enabled: false

extraObjects:
```

Anything else?

No response

dschniepp commented 6 months ago

We are facing the same issue, though from what I can see it is really an issue with https://github.com/Kong/kubernetes-ingress-controller.

skybalsamoan commented 6 months ago

Maybe, but because plugins are included in Kong Gateway, I was unsure whether the problem is caused by an undocumented change to the plugin (AFAIK none is documented) or by a change in KIC. Have you had a chance to solve the problem?
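One way to narrow down whether the regression comes from the gateway image or from KIC is to pin the two images independently in the chart values and bump them one at a time. A minimal sketch, assuming the standard image fields of the kong Helm chart (verify against your chart version's values.yaml; the tags shown are placeholders):

```yaml
# Hypothetical isolation test: pin gateway and controller versions separately,
# then change only one of them between deploys.
image:
  repository: kong
  tag: "3.5"   # gateway version under test

ingressController:
  enabled: true
  image:
    repository: kong/kubernetes-ingress-controller
    tag: "3.0"   # controller version under test
```

If the error appears only when the controller tag moves, that would support tracking this in the KIC repo.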

dschniepp commented 6 months ago

No; I found this issue because we are facing the same problem.

tomcatu commented 6 months ago

Is there any solution? I have the same problem.

dschniepp commented 6 months ago

@skybalsamoan can you create the issue in the KIC repo, or move it there? I am not sure this topic will get attention here.

carlosrmendes commented 6 months ago

facing the same issue

carlosrmendes commented 6 months ago

issue #5729 created in Kong/kubernetes-ingress-controller

StarlightIbuki commented 5 months ago

Closing, as this issue is tracked in the KIC repo.