mongodb / helm-charts

Topology Spread Constraint support #339

Closed zamsong123 closed 2 weeks ago

zamsong123 commented 3 months ago

What did you do to encounter the bug? Steps to reproduce the behavior: When I install the Helm chart on EKS, I cannot find any "topologySpreadConstraints:" related configuration in the chart values.

What did you expect? When we deploy the Helm chart to an AWS EKS environment, we want the pods to be spread across different Availability Zones.

What happened instead? The pods are not scheduled across separate AZs.
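
For illustration, this is the kind of entry we would like to be able to set. The topologySpreadConstraints field itself is standard Kubernetes; exposing it through the chart's values.yaml (the key name used here is an assumption) is what this issue asks for:

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: sample-app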

zamsong123 commented 3 months ago

I will contribute a fix for this issue, starting now.
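
A rough sketch of the change I have in mind for the deployment pod specs, using the standard Helm with/toYaml pass-through pattern (the values key name and the indentation depth are assumptions until the actual PR is up):

      {{- with .Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}

With this in place, whatever list is set under topologySpreadConstraints in values.yaml is rendered verbatim into each pod spec, and omitting the key renders nothing.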

zamsong123 commented 3 months ago
I did some testing; the rendered output is below:
sample-app % helm template .
---
# Source: sample-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-sample-app
  labels:
    helm.sh/chart: sample-app-0.1.0
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: sample-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-sample-app-frontend
  labels:
    helm.sh/chart: sample-app-0.1.0
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
    app: release-name-sample-app-frontend
spec:
  type: ClusterIP
  ports:
    - port: 3000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app: release-name-sample-app-frontend
---
# Source: sample-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-sample-app-backend
  labels:
    helm.sh/chart: sample-app-0.1.0
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
    app: release-name-sample-app-backend
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app: release-name-sample-app-backend
---
# Source: sample-app/templates/deployment-backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-sample-app-backend
  labels:
    helm.sh/chart: sample-app-0.1.0
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-app
      app.kubernetes.io/instance: release-name
      app: release-name-sample-app-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-app
        app.kubernetes.io/instance: release-name
        app: release-name-sample-app-backend
    spec:
      serviceAccountName: release-name-sample-app
      topologySpreadConstraints:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - mongodb1
          maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
      securityContext:
        {}
      containers:
        - name: sample-app
          securityContext:
            {}
          image: "quay.io/mongodb/farm-intro-backend:0.1"
          imagePullPolicy: Always
          command:
            - python3
            - main.py
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          env:
            - name: DB_URL
              valueFrom:
                secretKeyRef:
                  name: <resource-name>-<database>-<user>
                  key: connectionString.standard
            - name: DB_NAME
              value: 'admin'
          resources:
            {}
---
# Source: sample-app/templates/deployment-frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-sample-app-frontend
  labels:
    helm.sh/chart: sample-app-0.1.0
    app.kubernetes.io/name: sample-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
    app: release-name-sample-app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-app
      app.kubernetes.io/instance: release-name
      app: release-name-sample-app-frontend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-app
        app.kubernetes.io/instance: release-name
        app: release-name-sample-app-frontend
    spec:
      serviceAccountName: release-name-sample-app
      topologySpreadConstraints:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - mongodb1
          maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
      securityContext:
        {}
      containers:
        - name: sample-app
          securityContext:
            {}
          image: "quay.io/mongodb/farm-intro-frontend:0.1"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          env:
          - name: DANGEROUSLY_DISABLE_HOST_CHECK
            value: 'true'
          - name: SVC_BACKEND
            value: release-name-sample-app-backend:8000
          resources:
            {}
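
Assuming the constraint is wired through as above, the resulting spread can be verified after deployment with standard kubectl (nothing here is chart-specific):

kubectl get nodes -L topology.kubernetes.io/zone
kubectl get pods -o wide
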
github-actions[bot] commented 1 month ago

This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.

github-actions[bot] commented 2 weeks ago

This issue was closed because it became stale and did not receive further updates. If the issue is still affecting you, please re-open it, or file a fresh Issue with updated information.