projectsveltos / addon-controller

The Sveltos Kubernetes add-on controller programmatically deploys add-ons and applications across tens of clusters. It supports ClusterAPI-powered clusters, Helm charts, Kustomize, and raw YAML, and has built-in support for multi-tenancy.
https://projectsveltos.github.io/sveltos/
Apache License 2.0

Resources created by cluster profile on the management/local cluster are not deleted when the workload cluster is deleted #719

Closed: pacharya-pf9 closed this issue 1 month ago

pacharya-pf9 commented 1 month ago

Problem Description

Create a ClusterProfile similar to the following:

apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: cluster-orchestrator
spec:
  clusterSelector:
    matchLabels:
      core-addons: enabled
  continueOnConflict: false
  policyRefs:
  - deploymentType: Local
    kind: ConfigMap
    name: cluster-orchestrator-configmap
    namespace: kube-system
  reloader: false
  stopMatchingBehavior: WithdrawPolicies
  syncMode: ContinuousWithDriftDetection
  tier: 100
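
The profile above selects clusters via `clusterSelector.matchLabels`. As a rough illustration (not Sveltos's actual code, which uses the standard Kubernetes label-selector machinery), matching requires every key/value pair in `matchLabels` to be present on the cluster's labels:

```python
def matches(match_labels: dict, cluster_labels: dict) -> bool:
    """Simplified matchLabels semantics: all selector pairs must match."""
    return all(cluster_labels.get(k) == v for k, v in match_labels.items())

selector = {"core-addons": "enabled"}

print(matches(selector, {"core-addons": "enabled", "env": "qa"}))  # True
print(matches(selector, {"env": "qa"}))                            # False
```

So any cluster labeled `core-addons: enabled` picks up this profile, regardless of what other labels it carries.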

Relevant details of the ConfigMap referenced above:

apiVersion: v1
data:
  stack.yaml: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: {{ .Cluster.metadata.name }}-cluster-orchestrator
      name: {{ .Cluster.metadata.name }}-cluster-orchestrator
      namespace: {{ .Cluster.metadata.namespace }}
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: {{ .Cluster.metadata.name }}-cluster-orchestrator
      template:
        metadata:
          labels:
            app: {{ .Cluster.metadata.name }}-cluster-orchestrator
        spec:
          containers:
          - image: blah:v1.0.0
            imagePullPolicy: Always
            name: stack
            resources:
              limits:
                cpu: "1"
                memory: 1000Mi
              requests:
                cpu: 100m
                memory: 256Mi
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          serviceAccount: {{ .Cluster.metadata.name }}-cluster-orchestrator
          serviceAccountName: {{ .Cluster.metadata.name }}-cluster-orchestrator
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: {{ .Cluster.metadata.name }}-cluster-orchestrator
      namespace: {{ .Cluster.metadata.namespace }}
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: {{ .Cluster.metadata.name }}-orchestrator-rolebinding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: {{ .Cluster.metadata.name }}-cluster-orchestrator
      namespace: {{ .Cluster.metadata.namespace }}
kind: ConfigMap
metadata:
  annotations:
    projectsveltos.io/template: "true"
  labels:
    app.kubernetes.io/instance: emp-qa-capi-clusterprofile
  name: cluster-orchestrator-configmap
  namespace: kube-system
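
Because of the `projectsveltos.io/template: "true"` annotation, Sveltos instantiates this manifest per matching cluster, substituting `{{ .Cluster.metadata.name }}` and `{{ .Cluster.metadata.namespace }}` from the cluster object. The real engine is Go's text/template; the sketch below only mimics the simple dotted lookups used in this manifest, to show how the per-cluster resource names are derived:

```python
import re

def render(manifest: str, cluster: dict) -> str:
    """Mimic simple {{ .Cluster.x.y }} lookups (illustration only)."""
    def repl(m):
        # ".Cluster.metadata.name" -> ["Cluster", "metadata", "name"]
        path = m.group(1).split(".")[1:]
        value = {"Cluster": cluster}
        for key in path:
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*(\S+)\s*\}\}", repl, manifest)

# Hypothetical cluster standing in for the CAPI Cluster object.
cluster = {"metadata": {"name": "demo", "namespace": "clusters"}}
print(render("{{ .Cluster.metadata.name }}-cluster-orchestrator", cluster))
# demo-cluster-orchestrator
```

Note that the rendered names depend only on the cluster's name and namespace; a recreated cluster with the same name renders identical resource names, which is why stale resources from a deleted cluster collide with the new ones (problem 2 below).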

Create a cluster that matches the label core-addons=enabled. Once the cluster is created, the pod for that workload cluster is created on the management cluster, as expected. Delete the cluster, however, and the pod created by the above profile is not deleted. This causes two problems:

  1. Unnecessary pods are left behind on the management cluster.
  2. Reusing the cluster name leads to unexpected behavior, since an old pod with the same name already exists.

The ClusterSummary and ClusterReport resources are correctly deleted after the CAPI cluster is deleted.

System Information

CLUSTERAPI VERSION: v1.7.4
SVELTOS VERSION: v0.38.4
KUBERNETES VERSION: v1.30

gianlucam76 commented 1 month ago

Fixed in release v0.39.0.