Kong / kong

🦍 The Cloud-Native API Gateway and AI Gateway.
https://konghq.com/install/#kong-community
Apache License 2.0

Error: balancers.lua:228: get_balancer(): balancer not found for call-reminder-service.prod-khatabook.80.svc #8359

Closed: shail248 closed this issue 2 years ago

shail248 commented 2 years ago

Is there an existing issue for this?

Kong version ($ kong version)

2.6

Current Behavior

Hello All,

I recently deployed Kong in our production environment on EKS.

We are continuously getting this error:

log:2022/01/24 13:25:30 [error] 1096#0: *216396 [lua] balancers.lua:228: get_balancer(): balancer not found for support-panel-backend-service.prod-support-panel.80.svc, will create it, client: 10.1.75.84, server: kong, request: "GET /me HTTP/1.1", host: "support-panel-backend-service.khatabook.com"

This error occurs close to 10k times per day.

Here are the Kong ingress controller and proxy image details:

Ingress controller: kong/kubernetes-ingress-controller:1.3
Kong proxy: kong:2.6

Resources allocated to the pods:

          resources:
            limits:
              cpu: 500m
              memory: 1000Mi
            requests:
              cpu: 250m
              memory: 700Mi

We used the following manifest for the deployment:

apiVersion: v1
kind: Namespace
metadata:
  name: kong-khatabook-public
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-khatabook-public
  labels:
    infra-env: prod
    infra-service: kong-gateway
    infra-product: khatabook
  namespace: kong-khatabook-public
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kong-khatabook-public
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  - extensions
  - networking.internal.knative.dev
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  - extensions
  - networking.internal.knative.dev
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - tcpingresses/status
  verbs:
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongplugins
  - kongclusterplugins
  - kongcredentials
  - kongconsumers
  - kongingresses
  - tcpingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-khatabook-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-khatabook-public
subjects:
- kind: ServiceAccount
  name: kong-khatabook-public
  namespace: kong-khatabook-public
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ap-south-1:XXXXXXXXX:certificate/XXXXXX-a069-4f31-908a-8020b37aa1a1
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
    service.beta.kubernetes.io/aws-load-balancer-type: elb
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "prod-platform-kong-lb-access-log"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "XXX-XXXX-public"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "30"
#    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
#    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
#    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
#    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ap-south-1:741386957827:certificate/19df765e-5576-4389-b8ce-b54408ee22c9
#    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
#    service.beta.kubernetes.io/aws-load-balancer-type: nlb
#    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: kong-khatabook-public
    infra-env: prod
    infra-service: kong-gateway
    infra-product: khatabook
  name: kong-proxy
  namespace: kong-khatabook-public
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8000
  selector:
    app: kong-khatabook-public
  externalTrafficPolicy: Local
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    infra-env: prod
    infra-service: kong-gateway
    infra-product: khatabook
  name: kong-validation-webhook
  namespace: kong-khatabook-public
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: kong-khatabook-public
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kong-khatabook-public
  namespace: kong-khatabook-public
  labels:
    app.kubernetes.io/instance: kong-khatabook-public
    app: kong-khatabook-public
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong-khatabook-public
    app.kubernetes.io/version: '2.6'
    infra-env: prod
    infra-service: kong-gateway
    infra-product: khatabook
  annotations:
    app: kong-khatabook-public
    infra-env: prod
    infra-service: kong-gateway
    infra-product: khatabook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong-khatabook-public
      infra-env: prod
      infra-service: kong-gateway
      infra-product: khatabook
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kong-khatabook-public
        infra-env: prod
        infra-service: kong-gateway
        infra-product: khatabook
      annotations:
        prometheus.io/port: '8100'
        prometheus.io/scrape: 'true'
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: kong-khatabook-public
                  app.kubernetes.io/instance: kong-khatabook-public
                  app: kong-khatabook-public
              topologyKey: kubernetes.io/hostname
      nodeSelector:
        infra-node-group: node-group-application
      volumes:
        - name: kong-khatabook-public-prefix-dir
          emptyDir: {}
        - name: kong-khatabook-public-tmp
          emptyDir: {}
      containers:
        - name: ingress-controller
          image: 'kong/kubernetes-ingress-controller:1.3'
          args:
            - /kong-ingress-controller
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: CONTROLLER_ELECTION_ID
              value: kong-khatabook-public-leader
            - name: CONTROLLER_INGRESS_CLASS
              value: kong-khatabook-public
            - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
              value: 'true'
            - name: CONTROLLER_KONG_ADMIN_URL
              value: 'https://localhost:8444'
            - name: CONTROLLER_PUBLISH_SERVICE
              value: kong-khatabook-public/kong-proxy
          resources:
            limits:
              cpu: 150m
              memory: 250Mi
            requests:
              cpu: 100m
              memory: 150Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext: {}
        - name: proxy
          image: 'kong:2.6'
          ports:
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-tls
              containerPort: 8443  # was 8000, duplicating the proxy port; 8443 matches the ssl listener in KONG_PROXY_LISTEN
              protocol: TCP
            - name: status
              containerPort: 8100
              protocol: TCP
          env:
            - name: KONG_ADMIN_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_GUI_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_GUI_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_LISTEN
              value: '127.0.0.1:8444 http2 ssl'
            - name: KONG_CLUSTER_LISTEN
              value: 'off'
            - name: KONG_DATABASE
              value: 'off'
            - name: KONG_KIC
              value: 'on'
            - name: KONG_LUA_PACKAGE_PATH
              value: /opt/?.lua;/opt/?/init.lua;;
            - name: KONG_NGINX_WORKER_PROCESSES
              value: '2'
            - name: KONG_PLUGINS
              value: bundled
            - name: KONG_PORTAL_API_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PORTAL_API_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PORT_MAPS
              value: '80:8000, 443:8443'
            - name: KONG_PREFIX
              value: /kong_prefix/
            - name: KONG_PROXY_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PROXY_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PROXY_LISTEN
              value: '0.0.0.0:8000, 0.0.0.0:8443 http2 ssl'
            - name: KONG_STATUS_LISTEN
              value: '0.0.0.0:8100'
            - name: KONG_NGINX_DAEMON
              value: 'off'
            - name: KONG_TRUSTED_IPS
              value: '0.0.0.0/0,::/0'
            - name: KONG_REAL_IP_HEADER
              value: proxy_protocol
            - name: KONG_REAL_IP_RECURSIVE
              value: 'on'
          volumeMounts:
            - name: kong-khatabook-public-prefix-dir
              mountPath: /kong_prefix/
            - name: kong-khatabook-public-tmp
              mountPath: /tmp
          livenessProbe:
            httpGet:
              path: /status
              port: status
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /status
              port: status
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              cpu: 300m
              memory: 500Mi
            requests:
              cpu: 200m
              memory: 250Mi
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - /bin/sleep 15 && kong quit
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: kong-khatabook-public
      serviceAccount: kong-khatabook-public
      automountServiceAccountToken: true
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-khatabook-public
  namespace: kong-khatabook-public
  labels:
    infra-service: kong
    infra-env: prod
    infra-product: khatabook
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-khatabook-public
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60
        - type: Percent
          value: 10
          periodSeconds: 60
      stabilizationWindowSeconds: 60
      selectPolicy: Max
    scaleDown:
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60
      stabilizationWindowSeconds: 300
      selectPolicy: Max

---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kong-khatabook-public
  namespace: kong-khatabook-public
  labels:
    infra-service: kong
    infra-env: prod
    infra-product: khatabook
spec:
  selector:
    matchLabels:
      app: kong-khatabook-public
  maxUnavailable: "50%"

---
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus-khatabook-public
  annotations:
    kubernetes.io/ingress.class: kong-khatabook-public
  labels:
    global: "true"
plugin: prometheus

Let us know if any further info is required.
Regards, Shailesh

Expected Behavior

No response

Steps To Reproduce

No response

Anything else?

No response

locao commented 2 years ago

Hi @shail248! Thanks for letting us know. This is a debug message that was assigned the wrong level. It means the upstream entity was found but wasn't loaded yet at the time the request was received. The issue was fixed in #8410 and will be part of the next Kong release. You can safely disregard this message.
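
For anyone tracking this: once a release containing the #8410 fix ships, the noisy message goes away with a simple proxy image bump in the Deployment above. A minimal sketch of the change (the tag shown is illustrative, not a confirmed release for this fix; check the release changelog to confirm it includes #8410):

        - name: proxy
          image: 'kong:2.8'  # illustrative tag; verify the release notes mention #8410 before upgrading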

mostafabayat commented 2 years ago

I have a similar problem! We experienced some 503 errors on some routes. After investigating, we found that some Kong pods had this error ("balancer not found for ..."), and clients were getting "failure to get a peer from the ring-balancer". It is very confusing, because if I delete the pod (or even just kill the process that logs this error), the problem disappears. I changed the log level to debug (the exact change is sketched after the log excerpt below), but there was nothing besides this single error. Here is part of our debug logs:

2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.33.96.7
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.33.96.7, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.33.96.7
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.33.96.7 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.33.96.10
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.33.96.10, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.33.96.10
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.33.96.10 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.33.96.8
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.33.96.8, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.33.96.8
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.33.96.8 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.33.96.5
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.33.96.5, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.33.96.5
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.33.96.5 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.46.64.2
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.46.64.2, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.46.64.2
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.46.64.2 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:syncer.aaa.8080.svc) Got initial target list (0 targets)
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:syncer.aaa.8080.svc) active check flagged as active
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:syncer.aaa.8080.svc) Healthchecker started!
2022/02/19 10:44:50 [debug] 1099#0: *1593 [lua] init.lua:1004: balancer(): setting address (try 1): 10.105.145.159:8080
2022/02/19 10:44:50 [debug] 1099#0: *1593 [lua] init.lua:1033: balancer(): enabled connection keepalive (pool=10.105.145.159|8080, pool_size=60, idle_timeout=60, max_requests=100)
2022/02/19 10:44:50 [debug] 1099#0: *1459 [lua] init.lua:1004: balancer(): setting address (try 1): 10.32.32.10:8080
2022/02/19 10:44:50 [debug] 1099#0: *1459 [lua] init.lua:1033: balancer(): enabled connection keepalive (pool=10.32.32.10|8080, pool_size=60, idle_timeout=60, max_requests=100)
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.45.0.12
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.45.0.12, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.45.0.12
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.45.0.12 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.45.0.16
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.45.0.16, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.45.0.16
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.45.0.16 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.34.160.0
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.34.160.0, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.34.160.0
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.34.160.0 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for 10.39.32.10
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:346: f(): dns record type changed for 10.39.32.10, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.39.32.10
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:422: f(): querying dns and updating for 10.39.32.10 completed
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:sdk-config.metrix.8080.svc) Got initial target list (0 targets)
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:sdk-config.metrix.8080.svc) active check flagged as active
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] healthcheck.lua:1126: log(): [healthcheck] (0dc6f45b-8f8d-40d2-a504-473544ee190b:sdk-config.metrix.8080.svc) Healthchecker started!
2022/02/19 10:44:50 [debug] 1098#0: *1559 [lua] targets.lua:431: queryDns(): querying dns for gateway.xxy.svc
5.213.168.83 - - [19/Feb/2022:10:44:50 +0330] "POST /v2/sdk-error-log/ HTTP/2.0" 200 0 "-" "Dalvik/2.1.0 (Linux; U; Android 11; M2010J19CG Build/RKQ1.201004.002)"
2022/02/19 10:44:50 [error] 1098#0: *1325 [lua] balancers.lua:228: get_balancer(): balancer not found for gateway.xxx.8079.svc, will create it, client: 91.133.219.251, server: kong, request: "POST /v1/events/verify HTTP/2.0", host: "xxx.yyy.zzz", referrer: "https://www.xxx.yy/"
2022/02/19 10:44:50 [debug] 1099#0: *1536 [lua] init.lua:1004: balancer(): setting address (try 1): 10.32.0.3:8080
2022/02/19 10:44:50 [debug] 1099#0: *1536 [lua] init.lua:1033: balancer(): enabled connection keepalive (pool=10.32.0.3|8080, pool_size=60, idle_timeout=60, max_requests=100)
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:431: queryDns(): querying dns for 10.39.96.6
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:346: f(): dns record type changed for 10.39.96.6, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.39.96.6
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:422: f(): querying dns and updating for 10.39.96.6 completed
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:431: queryDns(): querying dns for 10.33.0.7
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:346: f(): dns record type changed for 10.33.0.7, nil -> 1
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:412: f(): updating balancer based on dns changes for 10.33.0.7
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:422: f(): querying dns and updating for 10.33.0.7 completed
2022/02/19 10:44:50 [debug] 1098#0: *1325 [lua] targets.lua:431: queryDns(): querying dns for 10.35.64.4
2022/02/19 10:44:50 [debug] 1099#0: *1399 [lua] init.lua:1004: balancer(): setting address (try 1): 10.100.62.194:80
2022/02/19 10:44:50 [debug] 1099#0: *1399 [lua] init.lua:1033: balancer(): enabled connection keepalive (pool=10.100.62.194|80, pool_size=60, idle_timeout=60, max_requests=100)
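
For reference, the debug logging above was enabled by setting Kong's log_level property to debug. A minimal sketch of the change, assuming a proxy container configured like the one earlier in this thread (Kong maps environment variables with the KONG_ prefix onto its configuration properties, so this is equivalent to log_level = debug in kong.conf):

            - name: KONG_LOG_LEVEL
              value: 'debug'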

Kong version: 2.7, Kubernetes version: 1.22

mostafabayat commented 2 years ago

@locao