pfisterer opened this issue 5 years ago
I did some more testing and the security groups that are generated and assigned somehow "do not work". When deploying applications to k8s, inter-pod communication does not work as expected. For instance, I deployed Hadoop using `helm install stable/hadoop --name=hadoop --namespace=hadoop --set hdfs.dataNode.replicas=3` and the pods go into `CrashLoopBackOff` state.
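For reference, the failing state shows up as restart loops in kubectl; the pod name below is illustrative, not taken from the actual deployment:

```shell
# List pods of the helm release; restarts/CrashLoopBackOff appear in the STATUS column
kubectl get pods --namespace hadoop

# Inspect events and container exit reasons for one pod (hypothetical pod name)
kubectl describe pod hadoop-hadoop-hdfs-dn-0 --namespace hadoop
```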
After adding "All TCP", "All UDP", and "All ICMP" ingress rules to the security group `sg-k8s-nodes`, everything works as expected.
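For anyone reproducing this without Ansible, the equivalent manual change via the OpenStack CLI would be roughly the following (the group name assumes the default `NAME` of `k8s`):

```shell
# Allow all ingress traffic for each protocol on the node security group
openstack security group rule create --ingress --protocol tcp sg-k8s-nodes
openstack security group rule create --ingress --protocol udp sg-k8s-nodes
openstack security group rule create --ingress --protocol icmp sg-k8s-nodes
```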
However, I do not really have a clue why the existing rules would prevent pods from communicating with each other. Any ideas?
Currently, I run this playbook after creating the k8s cluster as a workaround:
```yaml
---
- hosts: localhost
  tasks:
    - name: Allow any TCP ingress traffic to nodes
      os_security_group_rule:
        security_group: "sg-{{ lookup('env','NAME') | default('k8s', true) }}-nodes"
        protocol: tcp

    - name: Allow any UDP ingress traffic to nodes
      os_security_group_rule:
        security_group: "sg-{{ lookup('env','NAME') | default('k8s', true) }}-nodes"
        protocol: udp

    - name: Allow any ICMP ingress traffic to nodes
      os_security_group_rule:
        security_group: "sg-{{ lookup('env','NAME') | default('k8s', true) }}-nodes"
        protocol: icmp
```
I run some services in the k8s cluster of `Type: LoadBalancer`. Everything works fine and is set up correctly (i.e., a new OpenStack LBaaS v2 load balancer is created). I can connect to the service from within the k8s cluster. However, I cannot access the service from outside the cluster. Only after allowing ingress traffic in the `sg-k8s-nodes` security group does everything work as expected.

I would suggest providing an additional configuration environment variable containing the names of additional security groups that should be added to the master and the nodes. These could then selectively allow access to the services. If this sounds sensible, I could provide a pull request.
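As an illustration of what such a selective rule could look like, here is a sketch that, instead of opening all ingress, only admits traffic from the load balancer to the NodePorts that `Type: LoadBalancer` services use. It assumes the default Kubernetes NodePort range of 30000-32767 and a hypothetical load balancer subnet of 10.0.0.0/24; both would need to be adjusted to the actual deployment:

```shell
# Allow only the Kubernetes NodePort range, and only from the LB subnet,
# instead of opening all TCP/UDP/ICMP ingress on the node security group
openstack security group rule create \
  --ingress --protocol tcp \
  --dst-port 30000:32767 \
  --remote-ip 10.0.0.0/24 \
  sg-k8s-nodes
```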