k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

ingresses stop working every now and then #10054

Open myyc opened 2 weeks ago

myyc commented 2 weeks ago

Environmental Info: K3s Version:

k3s version v1.29.3+k3s1 (8aecc26b)
go version go1.21.8

Node(s) CPU architecture, OS, and Version:

Linux n1 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux
Linux n2 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux

Cluster Configuration:

2 nodes. 1 server, 1 agent

Describe the bug: Some ingresses stop working after a while; it looks like there are networking issues between the nodes. I have no firewall configured between them and they can otherwise talk to each other. Restarting coredns and the agent on node n2 (it is always that one) fixes things temporarily.
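
(The temporary fix I apply is roughly the two commands below; the coredns deployment name and namespace are assumed to be the defaults that k3s ships, so adjust if yours differ.)

# restart coredns, then restart the agent on n2 (sketch; default k3s names assumed)
kubectl -n kube-system rollout restart deployment/coredns
sudo systemctl restart k3s-agent   # run on node n2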

Steps To Reproduce:

This sounds more like an issue with my configuration than a bug. Any clue how to debug it? Should I just wipe the configuration on n2 and reinstall?

dereknola commented 2 weeks ago

You would need to provide k3s configuration and logs during the event for us to help you. There isn't enough for us to act on.

myyc commented 2 weeks ago

Which k3s configuration exactly, as in which files?

I checked k3s-agent's logs and there isn't anything meaningful. For example, yesterday the logs stopped at midnight, when everything was still working fine, but as soon as I restarted k3s-agent, this appeared:

May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 2497 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 2920 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 3191 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 3525 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 4335 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 79756 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.

Any clue? Any more logs I should inspect?

brandond commented 1 week ago

Please attach the complete logs from the time period in question. Those messages are all from systemd, not k3s. They are normal to see, as container processes remain running while k3s itself is stopped.
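
For example, something along these lines would capture them (the date range is only illustrative; adjust it to the window when the ingress was broken):

# on the agent node (n2)
journalctl -u k3s-agent --since "2024-05-01" --until "2024-05-03" --no-pager > k3s-agent.log
# on the server node (n1)
journalctl -u k3s --since "2024-05-01" --until "2024-05-03" --no-pager > k3s.log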

myyc commented 1 week ago

Can you tell me which logs specifically, as per the FAQ?

I'm sort of flying blind right now. I'm trying to connect to a specific ingress, it says "404 page not found", and I can't really see any useful info in the logs I'm checking. The only (non-realtime) message I see is in the traefik pod logs, e.g.

time="2024-05-08T10:58:41Z" level=error msg="Skipping service: no endpoints found" serviceName=blahblah namespace=stuff servicePort="&ServiceBackendPort{Name:,Number:8000,}" providerName=kubernetes ingress=blahblah
brandond commented 1 week ago

OK, well "can't connect to" is not really the same as "get a 404 response from". In this case you have specific logs from traefik indicating that there are no endpoints for that service, so you'd want to check on the pods backing that service and see why they're not ready.

myyc commented 1 week ago

I mentioned before that the pods and services are fine; I can port-forward and access the service without issues. The issue isn't always the same: earlier on it was a 404, now it's a gateway timeout. I just restarted k3s-agent again and it's all fine.
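
(The check I mean is roughly this, with the service name, namespace and port taken from the traefik log above:)

kubectl -n stuff port-forward service/blahblah 8000:8000
curl -v http://localhost:8000/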

I'll ask again: what is the correct way to debug this?

brandond commented 1 week ago

Pretty much just standard linux/kubernetes stuff...

  1. Journald logs - k3s on the servers, k3s-agent on agents
  2. Pod events - kubectl describe pod -n <NAMESPACE> <POD>, check for events, restarts, failed health checks, and so on
  3. Check service endpoints - kubectl describe service -n <NAMESPACE> <SERVICE>

Note that you will probably need to catch this very close in time to when you're unable to reach the site via the ingress.

For some reason the service's endpoints are going away at times. I get that you can port-forward to it and such, but you need to figure out why the endpoints are occasionally being removed from the service. This usually indicates that the pods are failing health-checks or are being restarted or recreated for some other reason.
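
Putting the list above together as commands, a minimal sketch (the endpoint watch at the end is an extra suggestion for catching the moment they disappear; namespace, pod, and service names are placeholders):

# 1. journald logs: k3s on the server, k3s-agent on the agent
journalctl -u k3s-agent -f
# 2. pod events, restarts, failed health checks
kubectl describe pod -n <NAMESPACE> <POD>
# 3. service endpoints, plus a watch to catch the moment they disappear
kubectl describe service -n <NAMESPACE> <SERVICE>
kubectl get endpoints -n <NAMESPACE> <SERVICE> -w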

myyc commented 1 week ago

They're not really "occasionally" removed; they always are, but it only applies to the endpoints on that node. Once that happens they stay that way until I restart k3s-agent on said node. Anyway, thanks for the help. I'll investigate.
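
One hedged suggestion, not something confirmed in this thread: when every endpoint backed by a single node disappears at once and stays gone until the agent is restarted, the node itself going NotReady is a common cause, so it is worth checking the node's condition the next time it happens:

# is n2 NotReady while the ingress is broken? (a guess worth ruling out)
kubectl get nodes
kubectl describe node n2        # look at Conditions and recent Events
kubectl get pods -A -o wide     # are the affected pods all on n2?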