Hi,

I'm on platform generic with MetalLB and tried to activate the OIDC authentication plugin with kube-prod's Keycloak. This didn't work because the kube-apiserver could not communicate with Keycloak's nginx-ingress. I analysed this further and found the following issue: https://github.com/metallb/metallb/issues/153.
The issue comes up when services published with `externalTrafficPolicy: "Local"` are accessed from within the cluster; this doesn't work with kube-proxy in IPVS mode.
After some debugging I found that the nginx-ingress Service is created by kube-prod with `externalTrafficPolicy: "Local"`. Since the kube-prod services published through the built-in nginx-ingress are used for management, it may be better to set `externalTrafficPolicy` to `Cluster` (see the sketch below). In my case I'm using a second nginx-ingress controller with its own ingress class for the external services where I may need the client IPs.
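For illustration, a minimal sketch of the relevant part of the Service spec. The name and namespace are assumptions based on a default install, and everything else is omitted:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller  # assumed name of the kube-prod-managed service
  namespace: kubeprod             # assumed namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster  # instead of "Local", so in-cluster clients
                                  # (e.g. the kube-apiserver) can reach it
```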
Please consider setting `externalTrafficPolicy` to `Cluster`.
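For completeness, here is roughly how the split setup mentioned above can look. All names, hosts, and the class name are illustrative; the annotation assumes a second controller started with `--ingress-class=external`:

```yaml
# Second controller's Service keeps "Local" to preserve client IPs,
# but only for external traffic. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
---
# External Ingresses select the second controller via its class.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-external-app
  annotations:
    kubernetes.io/ingress.class: external  # matches --ingress-class=external
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: my-external-app
              servicePort: 80
```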
Cheers, floek