Closed: ecsimsw closed this issue 1 year ago
In layer 2 mode, MetalLB deploys a Speaker Pod on each node, which responds to ARP (IPv4) and NDP (IPv6) requests.
If you now connect to the IP that your Kubernetes Service of type: LoadBalancer got from the range you defined in the MetalLB configuration, your client sends out an ARP who-has request for that IP.
That does not mean your Pod is running on the node whose MAC address is resolved; only the MetalLB "leader" speaker runs there. Your request is then handed to kube-proxy, which knows where your Pod actually lives.
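The two-hop flow above can be sketched as a small simulation. All names, MACs, and endpoint addresses here are made up for illustration; the real data plane is ARP plus kube-proxy's iptables/IPVS rules, not Python:

```python
# Illustrative sketch of MetalLB L2-mode traffic flow (hypothetical values).
# One node's speaker "owns" the service IP and answers ARP for it;
# kube-proxy on that node then forwards to the actual pod endpoints,
# which may live on any node in the cluster.

SERVICE_IP = "192.168.0.120"  # assigned from the MetalLB address pool

# Only the elected leader speaker answers ARP for the service IP.
arp_table = {SERVICE_IP: "aa:bb:cc:dd:ee:01"}  # leader node's MAC

# kube-proxy knows the real pod endpoints behind the Service.
endpoints = ["10.244.1.5:8080", "10.244.2.7:8080"]

def resolve_arp(ip: str) -> str:
    """Client's who-has request: resolved to the leader node's MAC."""
    return arp_table[ip]

def kube_proxy_forward(packet_no: int) -> str:
    """On the leader node, kube-proxy DNATs to one of the pod endpoints."""
    return endpoints[packet_no % len(endpoints)]

mac = resolve_arp(SERVICE_IP)
backend = kube_proxy_forward(0)
print(f"{SERVICE_IP} -> {mac} -> {backend}")
```

The point of the sketch: the MAC answer only selects the entry node; pod placement is decided one hop later by kube-proxy.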
L2 MetalLB: the Speaker runs as a DaemonSet; on an ARP request, the leader Speaker consults the IP table and replies with the MAC address of the node that routes the given IP. Which node holds each external IP is scheduled in a distributed way, so requests do not pile up on a single node. If a node dies, Speaker leader election happens again, and each node's IP tables get updated accordingly.
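The failover path described above can be sketched as follows. The election rule here (alphabetical pick among healthy nodes) is an assumption for the sketch; real MetalLB uses memberlist-based hashing, and clients learn the new MAC via gratuitous ARP:

```python
# Illustrative sketch of L2-mode failover (hypothetical nodes and MACs).
speakers = {
    "node-a": "aa:bb:cc:dd:ee:01",
    "node-b": "aa:bb:cc:dd:ee:02",
    "node-c": "aa:bb:cc:dd:ee:03",
}
alive = {"node-a", "node-b", "node-c"}
SERVICE_IP = "192.168.0.120"

def elect_leader() -> str:
    # Deterministic pick among healthy nodes; real MetalLB hashes over
    # memberlist members rather than sorting names (assumption here).
    return sorted(alive)[0]

# Initially node-a's speaker answers ARP for the service IP.
arp_table = {SERVICE_IP: speakers[elect_leader()]}

# node-a fails: re-election runs, and the new leader sends gratuitous
# ARP so clients replace the stale MAC in their ARP caches.
alive.discard("node-a")
arp_table[SERVICE_IP] = speakers[elect_leader()]
print(SERVICE_IP, "now answered by", elect_leader())
```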
ping 192.168.0.120
PING 192.168.0.120 (192.168.0.120) 56(84) bytes of data.
From 192.168.0.102 icmp_seq=2 Redirect Host(New nexthop: 120.0.168.192)
If you ping the external IP assigned by MetalLB, you can see the request being redirected to one of the Kubernetes nodes.
SO: https://stackoverflow.com/questions/68224438/how-metallb-works-in-kubernetes
악분: https://malwareanalysis.tistory.com/271
DevNote: https://andrewpage.tistory.com/200