zenhighzer opened this issue 2 years ago
I was recently working on a similar issue, and I determined the proxy wasn't set correctly in /etc/environment or /etc/profile.d/proxy.sh. Do you get anything running
curl -vvv https://webhook-service.metallb-system.svc
or
dig webhook-service.metallb-system.svc
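For reference, a minimal sketch of how to check the usual Ubuntu proxy locations mentioned above (proxy.sh may legitimately not exist, hence the redirect):

```shell
# Look for proxy settings in the common Ubuntu config locations.
grep -i proxy /etc/environment /etc/profile.d/proxy.sh 2>/dev/null

# And check the current shell environment for inherited proxy variables.
env | grep -i proxy || echo "no proxy variables in this shell"
```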
I'm seeing the same issue on two 22.10 ubuntu systems, I opened a thread in discourse: https://discuss.kubernetes.io/t/error-enabling-metallb-internal-error-context-deadline-exceeded/22092/
I repeated my same steps on an Ubuntu 22.04 LTS install, and this time it all worked.
More specifically, I first retried a simpler case in VMs, without involving metallb, and found that the connection to a service IP was flaky, and only worked quickly when the endpoint it was hitting happened to be on the same node. I retested that scenario with Ubuntu 22.10 and 22.04, and it consistently failed when the OS was Ubuntu 22.10.
I'm having the same issues with 3 Minisforum NUCs and Ubuntu 22.10.
Looking at the discussion panlinux posted, it looks like you were getting a webhook error:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": context deadline exceeded
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "l2advertisementvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-l2advertisement?timeout=10s": context deadline exceeded
Try running
kubectl get validatingwebhookconfiguration -o yaml
(validatingwebhookconfigurations are cluster-scoped, so no namespace flag is needed) and see if failurePolicy is set to Fail; I believe you can set it to Ignore.
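If you do decide to flip it (at the cost of skipping validation of metallb resources while the webhook is unreachable), a JSON patch along these lines should work. The object name below is an assumption based on recent metallb manifests; confirm it with the get command above first, and note that each entry under .webhooks[] carries its own failurePolicy:

```shell
# Set failurePolicy to Ignore on one webhook entry (index 0 here).
# Object name is an assumption; verify with:
#   kubectl get validatingwebhookconfiguration
kubectl patch validatingwebhookconfiguration metallb-webhook-configuration \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'
```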
It looks like it might be getting hung up on a proxy, so that might be another thing to look into.
> and see if failurePolicy is set to Fail, I believe you can set it to ignore.
It's Fail
Indeed, but before ignoring an error it's important to understand why it's happening, and why only on Ubuntu kinetic (22.10). It works on jammy (22.04).
> It looks like it might be getting hung on proxy? so that might be another option to look into
No proxy here. This can easily be replicated in a kinetic VM; I just did it now, with microk8s 1.25.4 and two kinetic VMs.
Hi @panlinux, this could be related to a vxlan bug that breaks checksum calculation.
Could you try to see whether:
microk8s kubectl patch felixconfigurations default --patch '{"spec":{"featureDetectOverride":"ChecksumOffloadBroken=true"}}' --type=merge
helps with your issue?
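For anyone trying this, you can confirm the override landed with a jsonpath query (field name per Calico's FelixConfiguration spec):

```shell
# Print the featureDetectOverride field of the default FelixConfiguration;
# after the patch above it should contain ChecksumOffloadBroken=true.
microk8s kubectl get felixconfigurations default \
  -o jsonpath='{.spec.featureDetectOverride}'
```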
It seems to help to put e.g. metallb-system.svc into the set of no_proxy variables:
$ cat /etc/environment
...
NO_PROXY=127.0.0.1,::1,localhost,10.152.183.0/24,10.1.0.0/16,metallb-system.svc
no_proxy=127.0.0.1,::1,localhost,10.152.183.0/24,10.1.0.0/16,metallb-system.svc
...
as suggested somewhere above and in this metallb repo issue. (I had activated the DNS addon before activating the metallb addon, i.e. microk8s enable dns, in case that matters.)
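For anyone wondering why the bare suffix works: most clients match no_proxy entries as host suffixes, so metallb-system.svc also covers webhook-service.metallb-system.svc. A rough illustration in plain shell follows; the function name is made up for this sketch, and real clients differ in details (e.g. CIDR entries like 10.1.0.0/16 are honoured by some tools and ignored by others):

```shell
#!/bin/sh
# bypasses_proxy: illustrative helper (not a real tool) that returns 0 when
# the host in $1 ends with one of the comma-separated entries in $NO_PROXY.
bypasses_proxy() {
    host=$1
    old_ifs=$IFS
    IFS=,
    for entry in $NO_PROXY; do
        case $host in
            *"$entry") IFS=$old_ifs; return 0 ;;  # suffix match
        esac
    done
    IFS=$old_ifs
    return 1
}

NO_PROXY=127.0.0.1,::1,localhost,10.152.183.0/24,10.1.0.0/16,metallb-system.svc
bypasses_proxy webhook-service.metallb-system.svc && echo bypass || echo proxied
```

Run as-is, the last line prints "bypass", because the hostname ends in the metallb-system.svc entry.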
@neoaggelos thanks for the workaround. I just tried your fix and it works. My setup:
4 x RPi 4B running 22.10 server with the DNS addon.
I patched felixconfigurations and enabled metallb, and I'm not seeing any errors.
I had existing BGP configurations and all are working as expected. Thank you 👍
spec:
KUBE_VER="v1.29"
METALLB_VER="v0.13.12"
CALICO_VER="v3.27.0"
Ubuntu 22.04.3 LTS

The same unreachable error.
Cause: networking misconfiguration. Check your firewall and connectivity before proceeding with any installations. In my case it was ICMP unreachable with a direct IP, and traffic wasn't routable from master to slaves. That answers the "why", @panlinux.
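The pre-flight check that would have caught this is simple. A sketch, where NODES is a placeholder list to replace with your own master/worker IPs:

```shell
#!/bin/sh
# Ping every node once before installing anything; an UNREACHABLE line means
# routing/firewall must be fixed first. 127.0.0.1 is a placeholder value.
NODES="127.0.0.1"    # e.g. NODES="192.168.1.10 192.168.1.11 192.168.1.12"
for node in $NODES; do
    if ping -c 1 -W 2 "$node" >/dev/null 2>&1; then
        echo "$node reachable"
    else
        echo "$node UNREACHABLE"
    fi
done
```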
Summary
Enabling the metallb addon throws an error, and Services with type "LoadBalancer" are stuck in state "pending". k get svc:
default test LoadBalancer 10.152.183.52 <pending> 80:31110/TCP 44m
What Should Happen Instead?
No errors while activating the metallb addon, and Services with type LoadBalancer should get an IP.
Reproduction Steps
Following the guide: https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
1) Install Ubuntu on the Pis
2) Edit the cmdline file with cgroup_enable=memory cgroup_memory=1, so the whole line looks like:
cgroup_enable=memory cgroup_memory=1 console=serial0,115200 dwc_otg.lpm_enable=0 console=tty1 root=LABEL=writable rootfstype=ext4 rootwait fixrtc quiet splash
3) Reboot
4) Install microk8s via snap
5) Build the cluster via microk8s
6) Enable microk8s addons:
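For completeness, step 6 on a setup like this would look roughly like the following; the address range is a placeholder for whatever pool fits your LAN, not a value taken from this report:

```shell
# Enable DNS first, then metallb with an address pool from your own network.
# The range below is an example placeholder, not a recommendation.
microk8s enable dns
microk8s enable metallb:192.168.1.240-192.168.1.250
```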
Pods seem to run fine:
Introspection Report
After running microk8s inspect there is an error:
The memory cgroup is not enabled, but it should be
-> (please look at Reproduction Steps)

Can you suggest a fix?
I tried the same setup, but with Ubuntu 20.04.5: no errors, and Services with type LoadBalancer receive an IP. So the error must have something to do with Ubuntu 22.10.
Are you interested in contributing with a fix?
I would like to help, but I don't know how.
inspection-report-20221027_133229.tar.gz