
[bitnami/etcd] Network policy ingress rules are overridden by metrics rule #27981

Closed: vipulagarwal closed this issue 2 weeks ago

vipulagarwal commented 1 month ago

Name and Version

bitnami/etcd-10.2.6

What architecture are you using?

amd64

What steps will reproduce the bug?

Set metrics.enabled to true.

Change the following values under networkPolicy:

Are you using any custom parameters or values?

networkPolicy:
  allowExternal: false
  ingressNSMatchLabels:
    mylabel: "true"

metrics:
  enabled: true

What is the expected behavior?

Only namespaces with the label mylabel=true should be allowed to connect to etcd on port 2379.

What do you see instead?

All namespaces can connect to etcd when metrics are enabled in the Helm chart.
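This can be confirmed from a pod in a namespace that does not carry mylabel=true. A rough check, assuming the chart defaults (client TLS disabled), a release named release-name in the default namespace, and a busybox image whose nc applet supports -z (the official image does):

# Run from a namespace WITHOUT mylabel=true; the connection still succeeds
kubectl run np-test --rm -it --restart=Never -n unlabelled-ns \
  --image=busybox:1.36 -- \
  nc -z -w 2 release-name-etcd.default.svc.cluster.local 2379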

This happens because of the following template code: https://github.com/bitnami/charts/blob/etcd/10.2.6/bitnami/etcd/templates/networkpolicy.yaml#L76-L80
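For context, that block is roughly the following (paraphrased, not a verbatim copy of the chart source):

{{- if .Values.metrics.enabled }}
# Allow prometheus scrapes for metrics
- ports:
    - port: {{ .Values.containerPorts.client }}
{{- end }}

Because the rule sets ports but no from clause, it matches traffic from any source.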

Additional information

I appreciate the intention to automate the network policy when Prometheus metrics are enabled and to promote metrics scraping. However, this logic overrides the network policy restrictions intended by ingressNSMatchLabels.

vipulagarwal commented 1 month ago

A rendered template for the above example:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: release-name-etcd
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: etcd
    app.kubernetes.io/version: 3.5.14
    helm.sh/chart: etcd-10.2.6
    app.kubernetes.io/component: etcd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/name: etcd
      app.kubernetes.io/component: etcd
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow inbound connections
    - ports:
        - port: 2379
        - port: 2380
      from:
        - podSelector:
            matchLabels:
              release-name-etcd-client: "true"
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: release-name
              app.kubernetes.io/name: etcd
              app.kubernetes.io/component: etcd
        - namespaceSelector:
            matchLabels:
              "mylabel": "true"
    # Allow prometheus scrapes for metrics
    - ports:
        - port: 2379

The last three lines add an ingress rule with no from clause. NetworkPolicy ingress rules are additive (OR'ed together), and a rule that specifies only ports matches traffic from any source, so this rule effectively nullifies the restricted rule above it for port 2379.

rafariossaa commented 1 month ago

Hi, in this case I would suggest disabling networkPolicy and using extraDeploy to ship your own custom NetworkPolicy.
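A minimal sketch of that workaround, assuming a release named release-name in the default namespace; the label mirrors the example above:

networkPolicy:
  enabled: false

extraDeploy:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: release-name-etcd-custom
    spec:
      podSelector:
        matchLabels:
          app.kubernetes.io/instance: release-name
          app.kubernetes.io/name: etcd
      policyTypes:
        - Ingress
      ingress:
        # Only namespaces labelled mylabel=true may reach the client/peer ports
        - from:
            - namespaceSelector:
                matchLabels:
                  mylabel: "true"
          ports:
            - port: 2379
            - port: 2380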

vipulagarwal commented 1 month ago

Hi @rafariossaa, thanks for pointing out extraDeploy; that can help in my use case. However, I still believe the conditional for the metrics network policy is implemented incorrectly and overcompensates just to let Prometheus scrape metrics. It effectively overrides ingressNSMatchLabels and extraIngress for etcd port 2379, which matters in the important case where the user cannot afford to set up authentication on etcd. Perhaps there should be an extra field for specifying a pod or namespace label for the user's Prometheus installation in the metrics network policy. Happy to work on a PR if this makes sense. Please correct me if I am missing something here.
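To make that concrete, a hypothetical sketch; the metrics.networkPolicy values below do not exist in the chart today and the names are only illustrative:

metrics:
  enabled: true
  networkPolicy:
    # Hypothetical value: restrict the metrics rule to namespaces with this label
    prometheusNSMatchLabels:
      kubernetes.io/metadata.name: monitoring

The metrics rule in templates/networkpolicy.yaml could then gain a from clause:

{{- if .Values.metrics.enabled }}
# Allow prometheus scrapes for metrics
- ports:
    - port: {{ .Values.containerPorts.client }}
  {{- if .Values.metrics.networkPolicy.prometheusNSMatchLabels }}
  from:
    - namespaceSelector:
        matchLabels: {{- toYaml .Values.metrics.networkPolicy.prometheusNSMatchLabels | nindent 10 }}
  {{- end }}
{{- end }}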

rafariossaa commented 1 month ago

Hi, thanks for your offer. Let's see if more issues are opened on this topic and take a decision then. I think these cases are very dependent on the environment, on how and where you have deployed Prometheus (and other tools), so the network policy needs will differ in each case. I think there is no silver bullet for this, hence another mechanism (extraDeploy) is provided to do what is needed.

github-actions[bot] commented 3 weeks ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 weeks ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.