Kong / kong

🦍 The Cloud-Native API Gateway and AI Gateway.
https://konghq.com/install/#kong-community
Apache License 2.0

Buggy behavior after failed health check recovers #12694

Open mhkarimi1383 opened 6 months ago

mhkarimi1383 commented 6 months ago

Is there an existing issue for this?

Kong version ($ kong version)

3.5.0 (With KIC 2.12)

Current Behavior

Sometimes when a health check fails and then recovers, Kong still responds with 503 for that service.

Expected Behavior

Responses recover as soon as the health check recovers.

Steps To Reproduce

  1. Run Kong in a K8s environment with KIC
  2. Bring up a project and create an Ingress and an UpstreamPolicy with a TCP or HTTP health check (TCP preferred); a sketch of the wiring is shown after this list
  3. Configure the health check to fail for some time (you will get a 503 error)
  4. Make the health check succeed again (you may keep getting a 503 error)
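
A minimal sketch of that wiring, assuming KIC 2.x with the health check carried by a KongIngress (shown later in the thread); every name, port, and path below is a placeholder rather than a value from the report:

# Sketch only: names, ports, and paths are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  annotations:
    konghq.com/override: demo-healthcheck   # KongIngress holding the healthcheck config
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /demo
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80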

Anything else?

No response

StarlightIbuki commented 6 months ago

The behavior sounds expected to me. The health check status does not update immediately, and the passive health checker cannot predict if the next request will succeed. Could you elaborate?
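
For reference, Kong's upstream healthchecks combine an active checker (Kong probes targets on its own schedule) and a passive one (Kong observes real proxied traffic); a minimal sketch with illustrative values only, not the reporter's config:

# Illustrative values only.
healthchecks:
  active:                # probed on a timer, independent of traffic
    type: http
    http_path: /health
    healthy:
      interval: 5        # seconds between probes of healthy targets
      successes: 1       # successful probes needed to mark a target healthy again
    unhealthy:
      interval: 5
      http_failures: 1   # failed probes needed to mark a target unhealthy
  passive:               # driven only by real proxied requests
    healthy:
      successes: 1
    unhealthy:
      http_failures: 1
      timeouts: 3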

mhkarimi1383 commented 6 months ago

@StarlightIbuki Hi. After the interval passes it should recover, but it doesn't. Also, clearing Kong's cache via the Admin API fixes the issue.

mhkarimi1383 commented 6 months ago

It happens when we have a rolling update on our K8s Deployment

StarlightIbuki commented 6 months ago

@mhkarimi1383 Is the upstream failing in a predictable or controllable manner, so that you are sure the reported status does not reflect the actual state?

mhkarimi1383 commented 6 months ago

@StarlightIbuki Yes. I have sent requests to that pod and monitored its health check endpoint using a blackbox exporter pointing to its K8s Service.
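
For context, a probe like that is typically wired up roughly as follows (a sketch of a Prometheus scrape job using the blackbox exporter; the module, target Service address, and exporter address are placeholders, not values from the report):

# Sketch only: module, target, and exporter address are placeholders.
scrape_configs:
  - job_name: upstream-healthcheck-probe
    metrics_path: /probe
    params:
      module: [tcp_connect]          # or http_2xx for an HTTP health check
    static_configs:
      - targets:
          - demo-app.default.svc.cluster.local:8080
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter.monitoring.svc:9115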

StarlightIbuki commented 6 months ago

@mhkarimi1383 Could you share the config that you are using?

mhkarimi1383 commented 6 months ago

@StarlightIbuki Here is my KongIngress spec:

        upstream:
          healthchecks:
            active:
              healthy:
                interval: 5
                successes: 3
              type: tcp
              unhealthy:
                tcp_failures: 1
                interval: 5

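For context, under KIC 2.x this fragment would typically sit inside a full KongIngress object along the lines below (the metadata name is a placeholder), attached to the backing Service via the konghq.com/override annotation:

# Sketch only: the metadata name is a placeholder.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-healthcheck
upstream:
  healthchecks:
    active:
      type: tcp
      healthy:
        interval: 5
        successes: 3
      unhealthy:
        interval: 5
        tcp_failures: 1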

StarlightIbuki commented 6 months ago

5s seems a short interval. How long do you wait before inspecting the status?

mhkarimi1383 commented 6 months ago

@StarlightIbuki About 5 minutes

StarlightIbuki commented 6 months ago

I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?

mhkarimi1383 commented 6 months ago

> I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?

Yes, Kong says the project is unhealthy but it is actually healthy; clearing Kong's cache fixes the problem.

StarlightIbuki commented 6 months ago

> I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?
>
> Yes, Kong says the project is unhealthy but it is actually healthy; clearing Kong's cache fixes the problem.

Sorry, but let me confirm that my understanding is correct: for step 4, we configure the upstream to work again, and we then observe the health checker still reporting an unhealthy condition?

mhkarimi1383 commented 6 months ago

@StarlightIbuki Yes

github-actions[bot] commented 5 months ago

This issue is marked as stale because it has been open for 14 days with no activity.

ADD-SP commented 3 months ago

I have reproduced this issue locally using the master branch. @mhkarimi1383, thanks for your report.

Internal ticket for tracking: KAG-4588

Declarative config used to reproduce:

_format_version: "3.0"

_transform: true

services:
- name: service_1
  host: upstream_1
  routes:
   - name: route_1
     paths:
     - /1

upstreams:
- name: upstream_1
  targets:
  - target: localhost:80
  healthchecks:
    active:
      timeout: 10
      healthy:
        interval: 5
      unhealthy:
        http_statuses: [500]
        http_failures: 1
        interval: 5

mhkarimi1383 commented 3 months ago

Thanks

Sometimes clearing the cache does not work, and we either have to wait (for example, 20 minutes) or restart Kong to fix the problem.