bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/kubeapps] installs postgresql when helm repos are enabled, but postgresql.enabled=false is being used #16260

Closed: zentavr closed this issue 1 year ago

zentavr commented 1 year ago

Name and Version

bitnami/kubeapps 12.3.1

What architecture are you using?

arm64

What steps will reproduce the bug?

I would like to use a pre-provisioned RDS Aurora database for the Kubeapps installation and avoid running PostgreSQL in my AWS Fargate cluster. When packaging.helm.enabled is true and postgresql.enabled is false, the chart still installs PostgreSQL.

There is probably a condition in Chart.yaml that forces this behavior.
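For context, Helm enables or disables a bundled subchart through the condition field on its dependency entry in Chart.yaml. The sketch below is illustrative only (the version and repository values are assumptions, not the chart's actual content); it shows how tying the condition to the packaging flag rather than to postgresql.enabled would pull PostgreSQL in whenever Helm packaging is on:

dependencies:
  - name: postgresql
    version: "12.x.x"                                       # illustrative version
    repository: oci://registry-1.docker.io/bitnamicharts    # illustrative repository
    condition: packaging.helm.enabled                       # gated on the packaging flag instead of postgresql.enabled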

Are you using any custom parameters or values?

---
# helm upgrade kubeapps bitnami/kubeapps --install --create-namespace --namespace kube-apps --values ./values-kubeapps.yml
packaging:
  helm:
    enabled: false
  carvel:
    enabled: false
  flux:
    enabled: false

ingress:
  enabled: true
  hostname: "example.com"
  path: "/*"
  ingressClassName: "alb"
  annotations: {}

#authProxy:
#  enabled: true
#  provider: "oidc"
#  clientID: props.oidcClientId
#  clientSecret: props.oidcClientSecret
#  cookieSecret: ""
#  scope: "openid email groups"
#  # https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview/
#  extraFlags:
#    - "--oidc-issuer-url=" + props.oidcIssuerUrl

apprepository:
  nodeSelector: {}
  tolerations: []

frontend:
  nodeSelector: {}
  tolerations: []

dashboard:
  nodeSelector: {}
  tolerations: []

kubeappsapis:
  nodeSelector: {}
  tolerations: []

postgresql:
  enabled: false
  auth:
    username: "wifimap"
    database: "kubeapps"
    existingSecret: "postgres-aurora-rds"

externalDatabase:
  host: "website-staging-db.cluster-123.us-east-1.rds.amazonaws.com"
  port: "5432"

extraDeploy:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: postgres-aurora-rds
    data:
      postgres-password: "123="
  - apiVersion: v1
    kind: Secret
    metadata:
      name: sso-oidc-secret
    data:
      clientID: "123="
      clientSecret: "123="
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubeapps-sso-secret-access-role
    rules:
      - apiGroups: [""]
        resourceNames: ["sso-oidc-secret"]
        resources: ["secrets"]
        verbs: ["get", "list", "watch"]
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubeapps-sso-secret-access-role-rolebinding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubeapps-sso-secret-access-role
    subjects:
      - kind: ServiceAccount
        name: aws-load-balancer-controller
        namespace: kube-system
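One way to confirm this behavior without touching the cluster is to render the chart locally with the same values file and check whether a PostgreSQL StatefulSet is produced (a quick sketch, assuming the bitnami repo is already added and the values file is in the current directory):

helm template kubeapps bitnami/kubeapps --namespace kube-apps --values ./values-kubeapps.yml | grep -A 3 "kind: StatefulSet"

If the bug is present, the rendered output includes the kubeapps-postgresql StatefulSet even though postgresql.enabled is false.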

What is the expected behavior?

PostgreSQL pods/services are not installed.

What do you see instead?

PostgreSQL is installed:

$ kubectl get all -n kube-apps
NAME                                                              READY   STATUS                       RESTARTS   AGE
pod/apprepo-kube-apps-sync-bitnami-zl4g7-h2ppm                    1/1     Running                      0          66s
pod/kubeapps-5d78b89d8f-24jdb                                     1/1     Running                      0          114s
pod/kubeapps-5d78b89d8f-t2qhc                                     1/1     Running                      0          114s
pod/kubeapps-internal-apprepository-controller-5bd9d65874-rd77m   1/1     Running                      0          114s
pod/kubeapps-internal-dashboard-58b6b87d67-4f2pk                  1/1     Running                      0          114s
pod/kubeapps-internal-dashboard-58b6b87d67-wx77f                  1/1     Running                      0          114s
pod/kubeapps-internal-kubeappsapis-6b9559b9fb-6cgrd               1/1     Running                      0          114s
pod/kubeapps-internal-kubeappsapis-6b9559b9fb-tk65t               1/1     Running                      0          114s
pod/kubeapps-postgresql-0                                         0/1     CreateContainerConfigError   0          114s

NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubeapps                         ClusterIP   10.100.52.214    <none>        80/TCP     115s
service/kubeapps-internal-dashboard      ClusterIP   10.100.17.160    <none>        8080/TCP   115s
service/kubeapps-internal-kubeappsapis   ClusterIP   10.100.248.246   <none>        8080/TCP   115s
service/kubeapps-postgresql              ClusterIP   10.100.252.127   <none>        5432/TCP   115s
service/kubeapps-postgresql-hl           ClusterIP   None             <none>        5432/TCP   115s

NAME                                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubeapps                                     2/2     2            2           116s
deployment.apps/kubeapps-internal-apprepository-controller   1/1     1            1           116s
deployment.apps/kubeapps-internal-dashboard                  2/2     2            2           116s
deployment.apps/kubeapps-internal-kubeappsapis               2/2     2            2           116s

NAME                                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/kubeapps-5d78b89d8f                                     2         2         2       117s
replicaset.apps/kubeapps-internal-apprepository-controller-5bd9d65874   1         1         1       117s
replicaset.apps/kubeapps-internal-dashboard-58b6b87d67                  2         2         2       117s
replicaset.apps/kubeapps-internal-kubeappsapis-6b9559b9fb               2         2         2       117s

NAME                                   READY   AGE
statefulset.apps/kubeapps-postgresql   0/1     118s

NAME                                           SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/apprepo-kube-apps-sync-bitnami   */10 * * * *   False     0        <none>          70s

NAME                                             COMPLETIONS   DURATION   AGE
job.batch/apprepo-kube-apps-sync-bitnami-zl4g7   0/1           71s        71s
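As a side note, the CreateContainerConfigError on kubeapps-postgresql-0 is a separate symptom: it usually means a Secret or ConfigMap referenced by the container could not be resolved (for example a missing key in the existing password secret). The pod events normally name the exact object, e.g.:

kubectl describe pod kubeapps-postgresql-0 -n kube-apps
kubectl get events -n kube-apps --field-selector involvedObject.name=kubeapps-postgresql-0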

Additional information

No response

dgomezleon commented 1 year ago

Hi @zentavr ,

Exactly, it is due to the condition added in Chart.yaml. Would you like to contribute by creating a PR to solve the issue? The Bitnami team will be happy to review it and provide feedback. Here you can find the contributing guidelines.
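For anyone who wants to check which values path currently gates the bundled PostgreSQL, the chart metadata (including the dependencies and their condition fields) can be printed without installing anything:

helm show chart bitnami/kubeapps --version 12.3.1

Then look at the postgresql entry under dependencies: and its condition: field.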

github-actions[bot] commented 1 year ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 1 year ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.