munnerz opened this issue 6 years ago
For Elasticsearch I'm using a policy like this:
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: elasticsearch-in-cluster
spec:
  podSelector:
    matchLabels:
      navigator.jetstack.io/elasticsearch-cluster-name: logging
  ingress:
  - ports:
    - protocol: TCP
      port: 9300
    from:
    - podSelector:
        matchLabels:
          navigator.jetstack.io/elasticsearch-cluster-name: logging
  - ports:
    - protocol: TCP
      port: 9200
    from:
    - podSelector:
        matchLabels:
          k8s-app: fluentd
    - podSelector:
        matchLabels:
          app: logging-elasticsearch-kibana
    - podSelector:
        matchLabels:
          navigator.jetstack.io/elasticsearch-cluster-name: logging
  egress:
  - ports:
    - protocol: TCP
      port: 9300
    to:
    - podSelector:
        matchLabels:
          navigator.jetstack.io/elasticsearch-cluster-name: logging
  # For in-cluster kube-apiserver access, for leader election and status reporting
  - ports:
    - protocol: TCP
      port: 443
  policyTypes:
  - Ingress
  - Egress
```
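As a sanity check on which pods the policy above selects, `matchLabels` semantics can be sketched in Python: a `podSelector` matches a pod when every key/value pair in `matchLabels` is present in the pod's labels (the pod label sets below are made up for illustration):

```python
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    """A matchLabels selector matches when every key/value pair is
    present in the pod's labels; extra pod labels don't matter."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

# The policy's spec.podSelector: the pods the policy applies to.
policy_selector = {"navigator.jetstack.io/elasticsearch-cluster-name": "logging"}

# Hypothetical pods for illustration.
es_pod = {
    "navigator.jetstack.io/elasticsearch-cluster-name": "logging",
    "app": "elasticsearch",
}
fluentd_pod = {"k8s-app": "fluentd"}

print(selector_matches(policy_selector, es_pod))      # True
print(selector_matches(policy_selector, fluentd_pod)) # False
```

The same rule explains why the `from` entries for fluentd and Kibana only need the single label each of those deployments already carries.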
There is only one user-dependent piece of configuration: the list of `from` peers allowed to reach the Elasticsearch API. Here it is:

```yaml
- podSelector:
    matchLabels:
      k8s-app: fluentd
- podSelector:
    matchLabels:
      app: logging-elasticsearch-kibana
```
We will need to include DNS ports if remote reindexing by DNS name is desired. Currently remote reindexing isn't possible anyway, because elasticsearch.yaml needs configuration to set a whitelist of allowed remotes.
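For reference, the whitelist in question is the `reindex.remote.whitelist` setting in the Elasticsearch configuration file; the host below is a made-up example:

```yaml
# elasticsearch.yaml -- hosts allowed as remote reindex sources (example value)
reindex.remote.whitelist: "other-cluster.example.com:9200"
```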
Apparently there is a new dependency on DNS that did not exist in the standalone pilot & helm install.
I had to add network policy entries for DNS:
```yaml
- protocol: TCP
  port: 53
- protocol: UDP
  port: 53
```
Within an Elasticsearch or Cassandra cluster, we can restrict node-to-node network traffic so that it is not reachable from outside the cluster. This could establish a base level of security before full mTLS is enabled.
This will involve modifying the respective controllers to automatically create NetworkPolicy resources. We'll also need to update our e2e tests to deploy a NetworkPolicy-enabled Kubernetes cluster so we can test this.
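A controller generating such a policy would essentially template the manifest above per cluster. A minimal sketch in Python (the function name and manifest builder are illustrative; the real controllers would construct the equivalent typed object via client-go):

```python
def make_es_network_policy(cluster_name: str) -> dict:
    """Build a NetworkPolicy manifest (as a plain dict) for an
    Elasticsearch cluster, mirroring the hand-written policy above.
    The user-supplied 9200 ingress rule is omitted here, since its
    `from` list is the one user-dependent piece of configuration."""
    cluster_label = {
        "navigator.jetstack.io/elasticsearch-cluster-name": cluster_name
    }
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"elasticsearch-{cluster_name}"},
        "spec": {
            "podSelector": {"matchLabels": cluster_label},
            "policyTypes": ["Ingress", "Egress"],
            "ingress": [
                # Transport port: only other members of the same cluster.
                {
                    "ports": [{"protocol": "TCP", "port": 9300}],
                    "from": [{"podSelector": {"matchLabels": cluster_label}}],
                },
            ],
            "egress": [
                # Transport traffic to peers in the same cluster.
                {
                    "ports": [{"protocol": "TCP", "port": 9300}],
                    "to": [{"podSelector": {"matchLabels": cluster_label}}],
                },
                # kube-apiserver access for leader election / status reporting.
                {"ports": [{"protocol": "TCP", "port": 443}]},
            ],
        },
    }

policy = make_es_network_policy("logging")
print(policy["metadata"]["name"])  # elasticsearch-logging
```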
/kind feature