myyc opened this issue 2 weeks ago
You would need to provide k3s configuration and logs during the event for us to help you. There isn't enough for us to act on.
which k3s configuration exactly, as in which files?
i checked k3s-agent's logs and there isn't anything meaningful, e.g. yesterday logs stopped at midnight, when everything worked fine, but as soon as i restarted k3s-agent, this appeared:
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 2497 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 2920 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 3191 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 3525 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 4335 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 02 08:41:20 n2 systemd[1]: k3s-agent.service: Found left-over process 79756 (containerd-shim) in control group while starting unit. Ignoring.
May 02 08:41:20 n2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
any clue? any more logs i should inspect?
Please attach the complete logs from the time period in question. Those messages are all from systemd, not k3s. They are normal to see, as container processes remain running while k3s itself is stopped.
can you tell me which logs specifically? as per the FAQ:
i'm sort of blind right now, i am trying to connect to a specific ingress, it says 404 page not found and i can't really see any info in the logs i'm checking. the only (non-realtime) message i see is in the traefik pod logs, e.g.
time="2024-05-08T10:58:41Z" level=error msg="Skipping service: no endpoints found" serviceName=blahblah namespace=stuff servicePort="&ServiceBackendPort{Name:,Number:8000,}" providerName=kubernetes ingress=blahblah
OK, well "can't connect to" is not really the same as "get a 404 response from". In this case you have specific logs from traefik indicating that there are no endpoints for that service, so you'd want to check on the pods backing that service and see why they're not ready.
i mentioned before that pods and services are fine, i can port forward and access the service without issues. the issue isn't always the same. earlier on it was 404, now it's gateway timeout. i just restarted k3s-agent again and it's all fine.
i'll ask again. what is the correct way to debug this?
Pretty much just standard linux/kubernetes stuff...

- the `k3s` service logs on the servers, the `k3s-agent` service logs on the agents
- `kubectl describe pod -n <NAMESPACE> <POD>`, check for events, restarts, failed health checks, and so on
- `kubectl describe service -n <NAMESPACE> <SERVICE>`
Note that you will probably need to catch this very close in time to when you're unable to reach the site via the ingress.
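Spelled out as commands, that checklist might look like the following dry-run sketch. It only prints each step; the namespace/pod/service names are placeholders loosely taken from the traefik log above, and `journalctl` assumes the systemd-based install that the earlier log excerpts imply:

```shell
# Dry-run of the suggested checks: print each command instead of executing it.
# Fill in real names, then replace the echo with eval "$c" to actually run them.
NAMESPACE=stuff POD=blahblah-xxxxx SERVICE=blahblah
for c in \
  'journalctl -u k3s-agent --since "1 hour ago" --no-pager' \
  "kubectl describe pod -n $NAMESPACE $POD" \
  "kubectl describe service -n $NAMESPACE $SERVICE" \
  "kubectl get endpoints -n $NAMESPACE $SERVICE"
do
  echo "+ $c"
done
```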
For some reason the service's endpoints are going away at times. I get that you can port-forward to it and such, but you need to figure out why the endpoints are occasionally being removed from the service. This usually indicates that the pods are failing health-checks or are being restarted or recreated for some other reason.
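What "the endpoints going away" looks like concretely: the service's Endpoints object ends up with an empty `subsets` list, which is exactly the state traefik reports as "no endpoints found". A toy check against a saved copy (the JSON below is a minimal made-up sample, not real cluster output; on a live cluster you would feed it `kubectl get endpoints -n <NAMESPACE> <SERVICE> -o json`):

```shell
# Minimal made-up Endpoints object in the empty state traefik complains about.
cat <<'EOF' > /tmp/endpoints.json
{"kind":"Endpoints","metadata":{"name":"blahblah","namespace":"stuff"},"subsets":[]}
EOF

# An empty subsets list means no ready pod backs the service, which is when
# the ingress starts returning 404s or gateway timeouts.
if grep -q '"subsets":\[\]' /tmp/endpoints.json; then
  echo "stuff/blahblah: NO endpoints"
else
  echo "stuff/blahblah: endpoints present"
fi
```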
they're not really "occasionally" removed. they always are. but it only applies to those that are on that node. once that happens they will stay that way until i restart k3s-agent on said node. anyway, thanks for the help. i'll investigate.
Environmental Info:

K3s Version:
k3s version v1.29.3+k3s1 (8aecc26b)
go version go1.21.8
Node(s) CPU architecture, OS, and Version:
Linux n1 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux
Linux n2 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
Cluster Configuration:
2 nodes. 1 server, 1 agent
Describe the bug:
some ingresses stop working after a while. it seems like there are networking issues between the nodes. i have no firewall configured between them and they can otherwise talk to each other. restarting coredns and the agent on node n2 (always that one) fixes things temporarily.
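Since restarting the agent on n2 fixes things temporarily, one thing worth ruling out is the flannel VXLAN tunnel between the nodes, which k3s runs over UDP 8472 by default. A hedged sketch of two quick checks to run on n2 (the server IP is a placeholder, and a UDP probe with nc only proves the absence of an ICMP rejection, so treat it as a hint rather than proof):

```shell
# Placeholder for n1's address; substitute the real server IP.
SERVER_IP=192.0.2.10

# 1. The flannel VXLAN interface should exist on every node.
ip -brief link show flannel.1 2>/dev/null || echo "flannel.1 interface not found"

# 2. Probe UDP 8472 toward the server (k3s's default flannel VXLAN port).
if command -v nc >/dev/null 2>&1; then
  nc -u -z -w 2 "$SERVER_IP" 8472 2>/dev/null \
    && echo "udp/8472: no rejection seen" \
    || echo "udp/8472: no response (possibly filtered or unreachable)"
else
  echo "nc not installed; skipping the port probe"
fi
```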
Steps To Reproduce:
This sounds more like an issue with my configuration than a bug. Any clue how to debug this? Should i just wipe the configuration in n2 and reinstall?