@sstubbs is this something we can ask the folks at metallb?
I'm having the same problem with nginx. I asked on MetalLB's Slack channel but they seemed to think MetalLB has nothing to do with this and it's more likely a problem elsewhere. I ended up filing kubernetes/kubernetes#94563 which has more details.
Sorry for the late reply to this. I've been trying everything I can with this. I think it's an Istio problem rather than MetalLB or microk8s, as MetalLB passes the source IP through properly when using LoadBalancer services. I have reinstalled microk8s a few times trying different variations and still can't get the source IPs to show, even using NodePort.
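For reference, this is roughly the kind of NodePort service I've been testing with; the name, selector and ports are just placeholders, but externalTrafficPolicy: Local is the part that should make kube-proxy keep the client source IP:

```yaml
# Illustrative NodePort service (name, selector and ports are placeholders).
# externalTrafficPolicy: Local tells kube-proxy not to SNAT, so the backend
# pod should see the real client IP, at the cost of only using endpoints on
# the node that received the traffic.
apiVersion: v1
kind: Service
metadata:
  name: echo-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```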
I've posted this bug as I've had no response on the istio discuss page regarding this. https://github.com/istio/istio/issues/27313
@sstubbs thank you for digging deep into this. Very well appreciated.
It seems that if I enable ha-cluster on 1.19 stable, NodePort services do not retain source IPs. This is unrelated to the Istio and MetalLB issues. Could it be some misconfiguration of Calico in the addon, or am I the only one experiencing this? Every connection shows IP 169.254.0.2.
@ktsakalozos do you think I should create another issue for the Calico IP issue? I believe this is an issue with the ha-cluster addon: I have to disable ha-cluster on a clean 1.19 stable install to get the correct source IPs for traffic received via NodePort. Then this issue could probably be closed, as the Istio and MetalLB issues are separate.
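For anyone trying to reproduce, this is roughly what I've been doing (commands from memory, so double-check the channel and addon names against the microk8s docs):

```sh
# Clean 1.19 stable install via snap
sudo snap install microk8s --classic --channel=1.19/stable

# Disabling the ha-cluster addon reverts to the flannel/etcd setup,
# which is where NodePort traffic keeps the real source IP for me.
microk8s disable ha-cluster

# With ha-cluster enabled instead, every connection arrives with
# source IP 169.254.0.2.
```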
I created this issue https://github.com/projectcalico/calico/issues/4019 regarding the calico issue I'm having.
I've just installed microk8s 1.19/candidate and this issue seems to be resolved; I am getting the correct source IPs.
I have a similar issue and tried upgrading from v1.19.3 to v1.19.4 (1.19/stable -> 1.19/edge). I've also disabled ha-cluster, which seems to have moved the problem: the source IP now shows as the .1 address in the LB endpoint's IP subnet, whereas previously it was the node's Internal IP.
Name:         crpd-theshire-8598749594-p7ppm
Namespace:    default
Priority:     0
Node:         ubuntu/100.123.35.0
Start Time:   Sun, 15 Nov 2020 17:48:27 -0800
Labels:       app=crpd-theshire
              pod-template-hash=8598749594
Annotations:  <none>
Status:       Running
IP:           10.1.53.8
Name:                     crpd-theshire
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=crpd-theshire
Type:                     LoadBalancer
IP:                       10.152.183.10
IP:                       172.18.35.252
LoadBalancer Ingress:     172.18.35.252
Port:                     bgp  179/TCP
TargetPort:               179/TCP
NodePort:                 bgp  30959/TCP
Endpoints:                10.1.53.8:179
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30379
Traffic from outside the [single node] cluster reaches the container as though sourced from 10.1.53.0.
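For what it's worth, this is roughly how I'm checking the source address (pod name is from the describe output above; whether tcpdump is available in the image will vary):

```sh
# Watch inbound BGP (TCP/179) connections inside the pod and note the
# source address they arrive with; with externalTrafficPolicy: Local I'd
# expect the real client IP, but I see 10.1.53.0 instead.
microk8s kubectl exec -it crpd-theshire-8598749594-p7ppm -- \
  tcpdump -ni any tcp port 179
```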
Hi,
I'm not sure if this is the right place to ask, but all traffic arriving through the Istio ingress gateway shows the IP of the ingress gateway rather than the external client IP. Has anybody else had this issue?
I've tried patching it as suggested here (the patch is reproduced below): https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/
According to the MetalLB docs (https://metallb.universe.tf/usage), externalTrafficPolicy is supposed to be supported. I'm not really sure where this problem should be reported, as it seems to be a combination of the CNI, MetalLB and Istio. This is on the stable 1.19 channel using etcd and flannel.
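For completeness, the patch I tried is essentially the one from the linked authz-ingress task, i.e. switching the ingress gateway service to Local (reproduced from memory, so please check against the current doc):

```sh
# Set externalTrafficPolicy to Local on the Istio ingress gateway service
# so the original client IP should be preserved end to end.
kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```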