kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

Kong pod doesn't start due to failing probes #9397

Closed JoniJnm closed 2 weeks ago

JoniJnm commented 1 month ago

What happened?

The Kong pod doesn't start because its probes fail.

What did you expect to happen?

All the pods should start

How can we reproduce it (as minimally and precisely as possible)?

I'm installing it with:

helm upgrade --install dashboard \
  -n dashboard \
  --create-namespace \
  --version 7.5.0 \
  -f my-values.yaml \
  kubernetes-dashboard/kubernetes-dashboard

my-values.yaml:

app:
  ingress:
    enabled: true
kong:
  admin:
    addresses:
      - 127.0.0.1
  status:
    addresses:
      - 127.0.0.1
  proxy:
    addresses:
      - 127.0.0.1

I'm not using IPv6; that's why I set the addresses properties.
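
For reference, one way to confirm what these addresses values actually become inside the container (assuming the chart, like the upstream Kong chart, renders the listen addresses into KONG_*_LISTEN environment variables on the proxy container; the deployment name is taken from the pod list below):

$ kubectl exec -n dashboard deploy/dashboard-kong -c proxy -- printenv | grep LISTEN

With the values above, I would expect each listener to come back bound to 127.0.0.1 only.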

Anything else we need to know?

$ kubectl get pods -n dashboard
NAME                                                              READY   STATUS    RESTARTS   AGE
dashboard-kong-5dc5f8449c-ttnbv                                   0/1     Running   0          46s
dashboard-kubernetes-dashboard-api-655b77cc56-k28xm               1/1     Running   0          46s
dashboard-kubernetes-dashboard-auth-76f8f46444-skm42              1/1     Running   0          46s
dashboard-kubernetes-dashboard-metrics-scraper-6cf856bdd6-xzd88   1/1     Running   0          46s
dashboard-kubernetes-dashboard-web-6cfb46d455-wklw2               1/1     Running   0          46s
$ kubectl logs -n dashboard dashboard-kong-5dc5f8449c-59tg5
Defaulted container "proxy" out of: proxy, clear-stale-pid (init)
2024/08/21 12:04:15 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7
2024/08/21 12:04:15 [notice] 1#0: [lua] init.lua:776: init(): [request-debug] token for request debugging: c884fb4b-a462-4df0-bb9f-7a2ce02485ca
2024/08/21 12:04:15 [notice] 1#0: using the "epoll" event method
2024/08/21 12:04:15 [notice] 1#0: openresty/1.25.3.1
2024/08/21 12:04:15 [notice] 1#0: OS: Linux 6.8.0-40-generic
2024/08/21 12:04:15 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 65536:65536
2024/08/21 12:04:15 [notice] 1#0: start worker processes
2024/08/21 12:04:15 [notice] 1#0: start worker process 1319
2024/08/21 12:04:15 [notice] 1319#0: *1 [lua] init.lua:259: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2024/08/21 12:04:15 [notice] 1319#0: *1 [lua] init.lua:259: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2024/08/21 12:04:15 [notice] 1319#0: *1 [kong] init.lua:589 declarative config loaded from /kong_dbless/kong.yml, context: init_worker_by_lua*
$ kubectl events --for pod/dashboard-kong-5dc5f8449c-59tg5 --watch -n dashboard
LAST SEEN   TYPE     REASON      OBJECT                                MESSAGE
4m49s       Normal   Scheduled   Pod/dashboard-kong-5dc5f8449c-59tg5   Successfully assigned dashboard/dashboard-kong-5dc5f8449c-59tg5 to ubuntu
4m49s       Normal   Pulled      Pod/dashboard-kong-5dc5f8449c-59tg5   Container image "kong:3.6" already present on machine
4m49s       Normal   Created     Pod/dashboard-kong-5dc5f8449c-59tg5   Created container clear-stale-pid
4m49s       Normal   Started     Pod/dashboard-kong-5dc5f8449c-59tg5   Started container clear-stale-pid
4m3s (x2 over 4m48s)   Normal   Pulled      Pod/dashboard-kong-5dc5f8449c-59tg5   Container image "kong:3.6" already present on machine
4m3s (x2 over 4m48s)   Normal   Created     Pod/dashboard-kong-5dc5f8449c-59tg5   Created container proxy
4m3s (x2 over 4m48s)   Normal   Started     Pod/dashboard-kong-5dc5f8449c-59tg5   Started container proxy
3m29s (x6 over 4m39s)   Warning   Unhealthy   Pod/dashboard-kong-5dc5f8449c-59tg5   Liveness probe failed: Get "http://10.1.243.234:8100/status": dial tcp 10.1.243.234:8100: connect: connection refused
3m29s (x8 over 4m39s)   Warning   Unhealthy   Pod/dashboard-kong-5dc5f8449c-59tg5   Readiness probe failed: Get "http://10.1.243.234:8100/status/ready": dial tcp 10.1.243.234:8100: connect: connection refused
4m19s                   Normal    Killing     Pod/dashboard-kong-5dc5f8449c-59tg5   Container proxy failed liveness probe, will be restarted
4m3s                    Warning   FailedPreStopHook   Pod/dashboard-kong-5dc5f8449c-59tg5   PreStopHook failed
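
If I read the events right, the probes dial the pod IP (10.1.243.234:8100), while status.addresses: [127.0.0.1] binds Kong's status listener to loopback only, which would explain the connection refused. The rendered probe can be inspected with (deployment and container names taken from the output above):

$ kubectl get deploy dashboard-kong -n dashboard \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="proxy")].livenessProbe}'

An httpGet probe with no host field always targets the pod IP, not localhost, so a loopback-only listener can never pass it.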

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

7.5.0

Kubernetes version

1.29

Dev environment

$ microk8s version
MicroK8s v1.29.7 revision 7018
$ helm version --short
v3.15.4+gfa9efb0
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04 LTS
Release:    24.04
Codename:   noble

floreks commented 1 month ago

This looks like a configuration issue on your side. You can easily confirm this by checking whether the default installation works (without overriding any values).
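
For example, the same command as above, just without the custom values file:

$ helm upgrade --install dashboard \
  -n dashboard \
  --create-namespace \
  --version 7.5.0 \
  kubernetes-dashboard/kubernetes-dashboard
$ kubectl get pods -n dashboard -w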

JoniJnm commented 3 weeks ago

The default config won't work because IPv6 is not enabled in the cluster: https://github.com/kubernetes/dashboard/issues/9052

floreks commented 3 weeks ago

There is no way to make the default configuration work for both IPv6-enabled and IPv6-disabled clusters. Currently, we assume that IPv6 is available too. If either IPv4 or IPv6 is disabled, some manual configuration changes are required to make it work.
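
For an IPv4-only cluster, one plausible manual change (a sketch, not verified against the chart) is to bind the listeners to all IPv4 interfaces instead of loopback, so that probes against the pod IP can connect:

kong:
  admin:
    addresses:
      - 127.0.0.1
  status:
    addresses:
      - 0.0.0.0
  proxy:
    addresses:
      - 0.0.0.0

The key listener is the status one: the kubelet's HTTP probes target the pod IP on port 8100, so binding it to 127.0.0.1 guarantees the connection refused errors seen above.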

JoniJnm commented 2 weeks ago

Moving to Headlamp.