Hm, it looks like the init container uses the wrong HOST_IP. I have a 3-node Kubernetes setup:
NAME        STATUS   ROLES                  AGE     VERSION
ingress-1   Ready    ingress                3d18h   v1.21.6
master-1    Ready    control-plane,master   3d18h   v1.21.6
worker-1    Ready    worker                 3d18h   v1.21.6
When injecting Consul into my custom apps, connect-init uses worker-1's IP address, which is correct, but when ingress-nginx starts, HOST_IP is ingress-1's address, and there is no Consul agent listening on that node.
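A quick way to confirm which nodes actually run a Consul client agent is to list the client pods with their node assignments. This is a generic check, not from the original report; it assumes the default Helm chart labels and a "consul" namespace:

    # List Consul client DaemonSet pods and the nodes they are scheduled on
    kubectl get pods -n consul -l component=client -o wide

In this case no client pod shows up on ingress-1, which matches the HOST_IP connection failure above.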
So, I've figured out my problem: Consul needs a client pod on every node and uses a DaemonSet to achieve that (as stated here: https://www.consul.io/docs/k8s#client-agents). My ingress-1 node configuration has a taint which excludes it from the DaemonSet:
node_labels:
  node-role.kubernetes.io/ingress: "true"
node_taints:
  - "node-role.kubernetes.io/ingress=:NoSchedule"
In order to allow Consul to schedule a client pod on that node, I should have added tolerations like I had in my nginx config:
client:
  enabled: true
  tolerations: |
    - key: "node-role.kubernetes.io/ingress"
      operator: "Exists"
Overview of the Issue
I'm trying to integrate with nginx ingress via transparent proxy, and the consul-connect-inject-init container fails to start with an error.
It looks like it tries to use the host IP instead of the pod IP when connecting to Consul, but I don't understand why; it injects just fine into my app.
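For context, the injected init container normally learns the node address through the Kubernetes Downward API; a minimal sketch of that pattern (the field reference is standard Kubernetes, though the exact env wiring in consul-k8s may differ by version):

    # Downward API pattern typically used to expose the node's IP to a container;
    # the connect-inject init container then dials this address to reach the
    # local Consul client agent running on the same node
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP

With client agents running as a DaemonSet, this only works if every node that hosts injected pods also runs a Consul client, which is what breaks on the tainted ingress node described below.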
Reproduction Steps
Logs
Expected behavior
Environment details
k8s: v1.21.6
CNI: calico
Consul: v1.11.1
Additional Context
I actually had this working before, but I can't pinpoint what changed when I recreated the Kubernetes cluster.