cloudnative-pg / charts

CloudNativePG Helm Charts
Apache License 2.0

[Feature] optional network policy for the operator #101

Open jcpunk opened 1 year ago

jcpunk commented 1 year ago

The Prometheus node-exporter Helm chart includes an optional default network policy.

It would be nice if a policy that permits only the required access to the operator could be optionally enabled. https://cloudnative-pg.io/documentation/1.19/security/#exposed-ports

This request specifically ignores any Clusters created by the operator.
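
As a point of comparison, the node-exporter chart gates its policy behind a single values flag. A minimal sketch of what the same opt-in switch could look like here (the key name is hypothetical, not something this chart ships today):

  # values.yaml -- hypothetical key, not yet provided by the chart
  networkPolicy:
    enabled: false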

For egress perhaps something like:

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
    - port: 443
      protocol: TCP

winston0410 commented 1 year ago

Yes, indeed, having an optional network policy would be helpful. I want to implement this as well; are PRs welcome in this repo? If so, maybe I can give it a try.

phisco commented 1 year ago

@winston0410 sure, PRs are definitely welcome! We try to keep the chart as lean as possible, so unfortunately sometimes we have to reject some PRs, but this one feels like a totally valid addition!

winston0410 commented 1 year ago

Sure, this is my first attempt:

https://editor.networkpolicy.io/?id=c0jMGv4TmUc9l0hV

Not 100% sure about the egress; it would be great if someone who knows the project better could help.

sando38 commented 7 months ago

I have these egress rules:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-cloudnative-pg-policies
  namespace: cloudnative-pg
spec:
  podSelector:
    matchLabels:
      app: cloudnative-pg-operator
  policyTypes:
  - Egress
  egress:
  - # k8s' coreDNS
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53

  - # k8s API server for leases
    ports:
    - protocol: TCP
      port: 6443

  - # namespace database
    to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          cnpg.io/podRole: instance
    ports:
    - protocol: TCP
      port: 5432 # Postgres
    - protocol: TCP
      port: 8000 # Status
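
One caveat on the API-server rule above: it has no `to` selector, so it actually permits egress to any peer on TCP 6443. Where the control-plane address is known, the rule can be tightened with an ipBlock (the CIDR below is a placeholder; `kubectl get endpoints kubernetes` in the default namespace shows the real address):

  - # k8s API server (placeholder CIDR -- replace with your control-plane address)
    to:
    - ipBlock:
        cidr: 10.0.0.1/32
    ports:
    - protocol: TCP
      port: 6443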

And these for Ingress including metrics collection:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-cloudnative-pg-policies
  namespace: cloudnative-pg
spec:
  podSelector:
    matchLabels:
      app: cloudnative-pg-operator
  policyTypes:
  - Ingress
  ingress:
  - # CnPG webhook server
    ports:
    - protocol: TCP
      port: 9443

  - # VMagent for metrics scraping
    from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: vmagent
    ports:
    - protocol: TCP
      port: 8080
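
Wiring rules like these into the chart, as the original request asks, could be as simple as a conditional template. A sketch only, since the values keys and the `cloudnative-pg.fullname` helper used here are assumptions about the chart's internals:

# templates/networkpolicy.yaml (sketch; the enabled flag and rule values are hypothetical)
{{- if .Values.networkPolicy.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "cloudnative-pg.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app: cloudnative-pg-operator
  policyTypes:
  - Ingress
  - Egress
  ingress:
    {{- toYaml .Values.networkPolicy.ingress | nindent 4 }}
  egress:
    {{- toYaml .Values.networkPolicy.egress | nindent 4 }}
{{- end }}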

sando38 commented 7 months ago

I do not use a pgbouncer/pooler, hence I am not sure whether the above will block connections to the pgbouncer/pooler ;)
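
Since the policies above select only pods labeled app: cloudnative-pg-operator, they should not affect traffic to pooler pods at all; a pooler would need its own policy. A sketch, assuming the pooler pods are distinguishable by a label such as cnpg.io/poolerName (verify against the labels the operator actually sets on your pooler deployment):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-pooler-policies
  namespace: cloudnative-pg
spec:
  podSelector:
    matchLabels:
      cnpg.io/poolerName: pooler-rw  # placeholder pooler name
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 5432  # pgbouncer serves clients on the PostgreSQL port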