Closed: siddique670 closed this issue 2 weeks ago.
I'm facing the same issue too. After a restart it's fine, but after a while the same error comes back.
I faced the same issue, with the same symptoms, as described in this topic.
I used this Vagrantfile
In my case, the problem was with containerd settings.
I regenerated the /etc/containerd/config.toml file, and that fixed the problem:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
Also, if you use the systemd cgroup driver, you need to run:
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
After that, you can restart the containerd service, and it should help.
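For example (a minimal sketch, assuming containerd is managed by systemd):
sudo systemctl restart containerd
sudo systemctl status containerd --no-pager   # confirm it came back up cleanly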
@siddique670 Have you found a solution?
@pythonkid2 The issue is still there. Do I need to add some port rules to the Ubuntu firewall, or is it something else?
Same issue here. I added the right rules in UFW, but I don't think the firewall is the root cause: after a reboot the cluster is OK, then after some time the error appears again and I have to restart the control plane to "resolve" it.
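For reference, the ports a kubeadm control plane typically needs open can be allowed in UFW roughly like this (a sketch based on the standard kubeadm port list; adjust for your CNI and node layout):
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 10257/tcp       # kube-controller-manager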
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What Happened?
As per the screenshot attached below, I am getting a "connection refused" error.
Attach the log file
I configured the master node on Ubuntu, installed in VirtualBox 7. The Ubuntu version is shown below.
root@master:~# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
The Docker service is also running fine, as per the screenshot below.
The containers are also running fine.
Swap is also turned off.
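For reference, the typical way to disable swap on Ubuntu (a sketch, not necessarily the exact commands used here):
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # comment out swap entries so it stays off after reboot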
The node IP is also set.
systemctl status kubelet.service
kubectl config view
sudo systemctl restart kubelet.service
Please help me with the above issue and the attached screenshots. If you need any more info or logs, please let me know.
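For reference, typical diagnostics for this kind of connection-refused error (a sketch; assumes journalctl, crictl, and ss are available on the node):
journalctl -u kubelet --no-pager | tail -n 100   # recent kubelet logs
sudo crictl ps -a                                # state of the control-plane containers
sudo ss -lntp | grep 6443                        # check whether the API server is listening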
Operating System
Ubuntu
Driver
VirtualBox