bitnami / charts

Bitnami Helm Charts
https://bitnami.com

StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory #9338

Closed ngingihy closed 2 years ago

ngingihy commented 2 years ago

Name and Version

bitnami/kong

What steps will reproduce the bug?

I had Kong running normally in a namespace that I provisioned previously. I have now switched to a new namespace, which should have the same specs as the old one, but I keep getting the error `failed quota: ns-resource-quota: must specify limits.cpu,limits.memory`. I'm using the exact same values.yaml file.

These are the namespace events (below). It doesn't seem to happen for all pods.


8s          Normal    Started             pod/my-release-kong-migrate-z26ml       Started container kong-migrate
56m         Warning   FailedCreate        job/my-release-kong-migrate             Error creating: pods "my-release-kong-migrate-h4zp7" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory
52m         Normal    SuccessfulCreate    job/my-release-kong-migrate             Created pod: my-release-kong-migrate-vr5mk
38m         Normal    SuccessfulCreate    job/my-release-kong-migrate             Created pod: my-release-kong-migrate-6bbww
6m10s       Normal    SuccessfulCreate    job/my-release-kong-migrate             Created pod: my-release-kong-migrate-tstn5
4m16s       Normal    SuccessfulCreate    job/my-release-kong-migrate             Created pod: my-release-kong-migrate-mlqlg
11s         Normal    SuccessfulCreate    job/my-release-kong-migrate             Created pod: my-release-kong-migrate-z26ml
52m         Normal    ScalingReplicaSet   deployment/my-release-kong              Scaled up replica set my-release-kong-7787797cd4 to 2
38m         Normal    ScalingReplicaSet   deployment/my-release-kong              Scaled up replica set my-release-kong-7787797cd4 to 2
4m18s       Normal    ScalingReplicaSet   deployment/my-release-kong              Scaled up replica set my-release-kong-7787797cd4 to 2
13s         Normal    ScalingReplicaSet   deployment/my-release-kong              Scaled up replica set my-release-kong-79f4c86c9b to 2
56m         Warning   FailedCreate        statefulset/my-release-postgresql       create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory
47m         Warning   FailedCreate        statefulset/my-release-postgresql       create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory
17m         Warning   FailedCreate        statefulset/my-release-postgresql       create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory
3m37s       Warning   FailedCreate        statefulset/my-release-postgresql       create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory
2s          Warning   FailedCreate        statefulset/my-release-postgresql       create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory

Are you using any custom parameters or values?

## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
##
global:
  imageRegistry: "privateregistry.com"
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: ['test']
  storageClass: ""

## @section Common parameters

## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion: ""
## @param nameOverride String to partially override kong.fullname template with a string (will prepend the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override kong.fullname template with a string
##
fullnameOverride: ""
## @param commonAnnotations Common annotations to add to all Kong resources (sub-charts are not considered). Evaluated as a template
##
commonAnnotations: {}
## @param commonLabels Common labels to add to all Kong resources (sub-charts are not considered). Evaluated as a template
##
commonLabels: {}
## @param clusterDomain Kubernetes cluster domain
##
clusterDomain: cluster.local
## @param extraDeploy Array of extra objects to deploy with the release (evaluated as a template).
##
extraDeploy: []

## Enable diagnostic mode in the deployment
##
diagnosticMode:
  ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
  ##
  enabled: false
  ## @param diagnosticMode.command Command to override all containers in the deployment
  ##
  command:
    - sleep
  ## @param diagnosticMode.args Args to override all containers in the deployment
  ##
  args:
    - infinity

## @section Deployment parameters

## Bitnami kong image version
## ref: https://hub.docker.com/r/bitnami/kong/tags/
## @param image.registry kong image registry
## @param image.repository kong image repository
## @param image.tag kong image tag (immutable tags are recommended)
## @param image.pullPolicy kong image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Enable image debug mode
##
image:
  registry: privateregistry.com
  repository: team1/bitnamikong
  tag: latest
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## Example:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: ['test']
  ## Enable debug mode
  ##
  debug: false
## @param database Select which database backend Kong will use. Can be 'postgresql' or 'cassandra'
##
database: postgresql
## @param replicaCount Number of replicas of the kong Pod
##
replicaCount: 2
## @param hostAliases Add deployment host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param updateStrategy.type Set up update strategy for kong installation. Set to Recreate if you use a persistent volume that cannot be mounted by more than one pod, to make sure the pods are destroyed first.
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
## Example:
## updateStrategy:
##  type: RollingUpdate
##  rollingUpdate:
##    maxSurge: 25%
##    maxUnavailable: 25%
##
updateStrategy:
  type: RollingUpdate
## @param schedulerName Alternative scheduler
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param useDaemonset Use a daemonset instead of a deployment. `replicaCount` will not take effect.
##
useDaemonset: false
## @param extraVolumes Array of extra volumes to be added to the Kong deployment (evaluated as template). Requires setting `extraVolumeMounts`
##
extraVolumes: []
## @param initContainers Add additional init containers to the pod (evaluated as a template)
## e.g.
## - name: your-image-name
##   image: your-image
##   imagePullPolicy: Always
##   ports:
##     - name: portname
##        containerPort: 1234
##
initContainers: []
## @param sidecars Attach additional containers to the pod (evaluated as a template)
## e.g.
## - name: your-image-name
##   image: your-image
##   imagePullPolicy: Always
##   ports:
##     - name: portname
##        containerPort: 1234
##
sidecars: []
## SecurityContext configuration
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## @param containerSecurityContext.runAsUser Set Kong container's Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Kong container's Security Context runAsNonRoot
##
containerSecurityContext:
  runAsUser: 1001
  runAsNonRoot: true
## @param podSecurityContext Pod security context
##
podSecurityContext: {}
## @param nodeSelector Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ""
  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and  nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param podAnnotations Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podLabels Pod labels
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Add an horizontal pod autoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
## @param autoscaling.enabled Deploy a HorizontalPodAutoscaler object for the Kong deployment
## @param autoscaling.apiVersion API Version of the HPA object (for compatibility with Openshift)
## @param autoscaling.minReplicas Minimum number of replicas to scale back
## @param autoscaling.maxReplicas Maximum number of replicas to scale out
## @param autoscaling.metrics [array] Metrics to use when deciding to scale the deployment (evaluated as a template)
##
autoscaling:
  enabled: false
  apiVersion: autoscaling/v2beta1
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
## Kong Pod Disruption Budget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
## @param pdb.enabled Deploy a pdb object for the Kong pod
## @param pdb.maxUnavailable Maximum unavailable Kong replicas (expressed in percentage)
##
pdb:
  enabled: false
  maxUnavailable: "50%"

## @section Traffic Exposure Parameters

## Service parameters
##
service:
  ## @param service.type Kubernetes Service type
  ##
  type: LoadBalancer
  ## @param service.clusterIP Cluster internal IP of the service
  ## This is the internal IP address of the service and is usually assigned randomly.
  ## ref: https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec
  ##
  clusterIP: ""
  ## @param service.externalTrafficPolicy external traffic policy managing client source IP preservation
  ## default to "Cluster"
  ## set to "Local" in order to preserve the client source IP (only on service of type LoadBalancer or NodePort)
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
  ##
  externalTrafficPolicy: ""
  ## @param service.proxyHttpPort kong proxy HTTP service port
  ##
  proxyHttpPort: 80
  ## @param service.proxyHttpsPort kong proxy HTTPS service port
  ##
  proxyHttpsPort: 443
  ## @param service.exposeAdmin Add the Kong Admin ports to the service
  ##
  exposeAdmin: true
  ## @param service.adminHttpPort kong admin HTTP service port (only if service.exposeAdmin=true)
  ##
  adminHttpPort: 8001
  ## @param service.adminHttpsPort kong admin HTTPS service port (only if service.exposeAdmin=true)
  ##
  adminHttpsPort: 8444
  ## @param service.disableHttpPort Disable Kong proxy HTTP and Kong admin HTTP ports
  ##
  disableHttpPort: false
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ## @param service.proxyHttpNodePort Port to bind to for NodePort service type (proxy HTTP)
  ## @param service.proxyHttpsNodePort Port to bind to for NodePort service type (proxy HTTPS)
  ## @param service.adminHttpNodePort Port to bind to for NodePort service type (admin HTTP)
  ## @param service.adminHttpsNodePort Port to bind to for NodePort service type (admin HTTPS)
  ##
  proxyHttpNodePort: ""
  proxyHttpsNodePort: ""
  adminHttpNodePort: ""
  adminHttpsNodePort: ""
  ## @param service.loadBalancerIP loadBalancerIP if kong service type is `LoadBalancer`
  ## ref: https://kubernetes.io/docs/user-guide/services/#type-loadbalancer
  ##
  loadBalancerIP: ""
  ## @param service.annotations Annotations for kong service
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
  ## @param service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
  ##
  extraPorts: []
## Configure the ingress resource that allows you to access the
## Kong installation. Set up the URL
## ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## @param ingress.enabled Enable ingress controller resource
  ##
  enabled: false
  ## DEPRECATED: Use ingress.annotations instead of ingress.certManager
  ## certManager: false
  ##

  ## @param ingress.pathType Ingress path type
  ##
  pathType: ImplementationSpecific
  ## @param ingress.apiVersion Force Ingress API version (automatically detected if not set)
  ##
  apiVersion: ""
  ## @param ingress.hostname Default host for the ingress resource
  ##
  hostname: kong.local
  ## @param ingress.path Ingress path
  ## with ALB ingress controllers.
  ##
  path: /
  ## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ## Use this parameter to set the required annotations for cert-manager, see
  ## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
  ##
  ## e.g:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##   cert-manager.io/cluster-issuer: cluster-issuer-name
  ##
  annotations: {}
  ## @param ingress.tls Create TLS Secret
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret or rely on cert-manager to create it
  ##
  tls: false
  ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: kong.local
  ##   path: /
  ##
  extraHosts: []
  ## @param ingress.extraPaths Additional arbitrary path/backend objects
  ## For example: The ALB ingress controller requires a special rule for handling SSL redirection.
  ## extraPaths:
  ## - path: /*
  ##   backend:
  ##     serviceName: ssl-redirect
  ##     servicePort: use-annotation
  ##
  extraPaths: []
  ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ## - hosts:
  ##     - kong.local
  ##   secretName: kong.local-tls
  ##
  extraTls: []
  ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ## e.g:
  ## secrets:
  ## - name: kong.local-tls
  ##   key:
  ##   certificate:
  ##
  ##
  secrets: []

## @section Kong Container Parameters

kong:
  ## @param kong.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param kong.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param kong.initScriptsCM Configmap with init scripts to execute
  ## ConfigMap containing `/docker-entrypoint-initdb.d` scripts to be executed at initialization time (evaluated as a template)
  ##
  initScriptsCM: ""
  ## @param kong.initScriptsSecret Secret with init scripts to execute
  ## Secret containing `/docker-entrypoint-initdb.d` scripts to be executed at initialization time (that contain sensitive data). Evaluated as a template.
  ##
  initScriptsSecret: ""
  ## @param kong.extraEnvVars Array containing extra env vars to configure Kong
  ## For example:
  ## extraEnvVars:
  ##  - name: GF_DEFAULT_INSTANCE_NAME
  ##    value: my-instance
  ##
  extraEnvVars: []
  ## @param kong.extraEnvVarsCM ConfigMap containing extra env vars to configure Kong
  ##
  extraEnvVarsCM: ""
  ## @param kong.extraEnvVarsSecret Secret containing extra env vars to configure Kong (in case of sensitive data)
  ##
  extraEnvVarsSecret: ""
  ## @param kong.extraVolumeMounts Array of extra volume mounts to be added to the Kong Container (evaluated as template). Normally used with `extraVolumes`.
  ##
  extraVolumeMounts: []
  ## @param kong.customLivenessProbe Override default liveness probe (kong container)
  ##
  customLivenessProbe: {}
  ## @param kong.customReadinessProbe Override default readiness probe (kong container)
  ##
  customReadinessProbe: {}
  ## Configure extra options for liveness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param kong.livenessProbe.enabled Enable livenessProbe
  ## @param kong.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param kong.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param kong.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param kong.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param kong.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 120
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  ## Configure extra options for readiness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param kong.readinessProbe.enabled Enable readinessProbe
  ## @param kong.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param kong.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param kong.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param kong.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param kong.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  ## @param kong.lifecycleHooks Lifecycle hooks (kong container)
  ## ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
  ##
  lifecycleHooks: {}
  ## Container resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  ## @param kong.resources.limits The resources limits for the container
  ## @param kong.resources.requests The requested resources for the container
  ##
  resources:
    ## Example:
    limits:
       cpu: 500m
       memory: 1Gi
    # limits: {}
    ## Examples:
    requests:
       cpu: 100m
       memory: 128Mi
    # requests: {}

## @section Kong Migration job Parameters

migration:
  ## In case you want to use a custom image for Kong migration, set this value
  ## image:
  ##   registry:
  ##   repository:
  ##   tag:
  ##
  ## @param migration.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param migration.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param migration.hostAliases Add deployment host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## @param migration.annotations [object] Add annotations to the job
  ##
  annotations:
    helm.sh/hook: post-install, post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  ## @param migration.extraEnvVars Array containing extra env vars to configure the Kong migration job
  ## For example:
  ## extraEnvVars:
  ##  - name: GF_DEFAULT_INSTANCE_NAME
  ##    value: my-instance
  ##
  extraEnvVars: []
  ## @param migration.extraEnvVarsCM ConfigMap containing extra env vars to configure the Kong migration job
  ##
  extraEnvVarsCM: ""
  ## @param migration.extraEnvVarsSecret Secret containing extra env vars to configure the Kong migration job (in case of sensitive data)
  ##
  extraEnvVarsSecret: ""
  ## @param migration.extraVolumeMounts Array of extra volume mounts to be added to the Kong migration job container (evaluated as template). Normally used with `extraVolumes`.
  ##
  extraVolumeMounts: []
  ## Container resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  ## @param migration.resources.limits The resources limits for the container
  ## @param migration.resources.requests The requested resources for the container
  ##
  resources:
    ## Example:
    limits:
       cpu: 500m
       memory: 1Gi
    # limits: {}
    ## Examples:
    requests:
       cpu: 250m
       memory: 256Mi
    # requests: {}

## @section Kong Ingress Controller Container Parameters

ingressController:
  ## @param ingressController.enabled Enable/disable the Kong Ingress Controller
  ##
  enabled: false
  ## @param ingressController.customResourceDeletePolicy Add custom CRD resource delete policy (for Helm 2 support)
  ##
  customResourceDeletePolicy: {}
  ## @param ingressController.image.registry Kong Ingress Controller image registry
  ## @param ingressController.image.repository Kong Ingress Controller image name
  ## @param ingressController.image.tag Kong Ingress Controller image tag
  ## @param ingressController.image.pullPolicy kong ingress controller image pull policy
  ## @param ingressController.image.pullSecrets Specify docker-registry secret names as an array
  ##
  image:
    registry: docker.io
    repository: bitnami/kong-ingress-controller
    tag: 2.2.0-debian-10-r7
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param ingressController.proxyReadyTimeout Maximum time (in seconds) to wait for the Kong container to be ready
  ##
  proxyReadyTimeout: 300
  ## @param ingressController.rbac.create Create the necessary Service Accounts, Roles and Rolebindings for the Ingress Controller to work
  ## @param ingressController.rbac.existingServiceAccount Use an existing service account for all the RBAC operations
  ##
  rbac:
    create: true
    existingServiceAccount: ""
  ## @param ingressController.ingressClass Name of the class to register Kong Ingress Controller (useful when having other Ingress Controllers in the cluster)
  ##
  ingressClass: kong
  ## @param ingressController.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param ingressController.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param ingressController.extraEnvVars Array containing extra env vars to configure Kong
  ## For example:
  ## extraEnvVars:
  ##  - name: GF_DEFAULT_INSTANCE_NAME
  ##    value: my-instance
  ##
  extraEnvVars: []
  ## @param ingressController.extraEnvVarsCM ConfigMap containing extra env vars to configure Kong Ingress Controller
  ##
  extraEnvVarsCM: ""
  ## @param ingressController.extraEnvVarsSecret Secret containing extra env vars to configure Kong Ingress Controller (in case of sensitive data)
  ##
  extraEnvVarsSecret: ""
  ## @param ingressController.extraVolumeMounts Array of extra volume mounts to be added to the Kong Ingress Controller container (evaluated as template). Normally used with `extraVolumes`.
  ##
  extraVolumeMounts: []
  ## @param ingressController.customLivenessProbe Override default liveness probe (kong ingress controller container)
  ##
  customLivenessProbe: {}
  ## @param ingressController.customReadinessProbe Override default readiness probe (kong ingress controller container)
  ##
  customReadinessProbe: {}
  ## Configure extra options for liveness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param ingressController.livenessProbe.enabled Enable livenessProbe
  ## @param ingressController.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param ingressController.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param ingressController.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param ingressController.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param ingressController.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 120
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  ## Configure extra options for readiness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param ingressController.readinessProbe.enabled Enable readinessProbe
  ## @param ingressController.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param ingressController.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param ingressController.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param ingressController.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param ingressController.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  ## Container resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  ## @param ingressController.resources.limits The resources limits for the container
  ## @param ingressController.resources.requests The requested resources for the container
  ##
  resources:
    ## Example:
    limits:
       cpu: 500m
       memory: 1Gi
    # limits: {}
    ## Examples:
    requests:
       cpu: 250m
       memory: 256Mi
    # requests: {}

## @section PostgreSQL Parameters

## PostgreSQL properties
##
postgresql:
  ## @param postgresql.enabled Deploy the PostgreSQL sub-chart
  ##
  enabled: true
  volumeMounts:
    mountPath: /var/lib/postgresql/data
    name: data-my-release-postgresql-0

  image:
    registry: privateregistry.com
    repository: team1/postgres
    tag: 9.6
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: ['test']
  ## @param postgresql.usePasswordFile Mount the PostgreSQL secret as a file
  ##

  usePasswordFile: false
  ## Properties for using an existing PostgreSQL installation
  ##
  resources:
    ## Example:
    limits:
       cpu: 500m
       memory: 1Gi
    # limits: {}
    ## Examples:
    requests:
       cpu: 100m
       memory: 128Mi
  external:
    ## @param postgresql.external.host Host of an external PostgreSQL installation
    ##
    host: ""
    ## @param postgresql.external.user Username of the external PostgreSQL installation
    ##
    user: ""
    ## @param postgresql.external.password Password of the external PostgreSQL installation
    ##
    password: ""
  ## @param postgresql.existingSecret Use an existing secret file with the PostgreSQL password (can be used with the bundled chart or with an existing installation)
  ##
  existingSecret: ""
  ## @param postgresql.postgresqlDatabase Database name to be used by Kong
  ##
  postgresqlDatabase: kong
  ## @param postgresql.postgresqlUsername Username to be created by the PostgreSQL bundled chart
  ##
  postgresqlUsername: kong

## @section Cassandra Parameters

## Cassandra properties
##
cassandra:
  ## @param cassandra.enabled Deploy the Cassandra sub-chart
  ##
  enabled: false
  ## @param cassandra.dbUser.user Username to be created by the cassandra bundled chart
  ##
  dbUser:
    user: kong
  ## @param cassandra.usePasswordFile Mount the Cassandra secret as a file
  ##
  usePasswordFile: false
  ## Properties for using an existing Cassandra installation
  ##
  external:
    ## @param cassandra.external.hosts Hosts of an external cassandra installation
    ## e.g:
    ## hosts:
    ##   - host1
    ##   - host2
    ##
    hosts: []
    ## @param cassandra.external.port Port of an external cassandra installation
    ##
    port: 9042
    ## @param cassandra.external.user Username of the external cassandra installation
    ##
    user: ""
    ## @param cassandra.external.password Password of the external cassandra installation
    ##
    password: ""
  ## @param cassandra.existingSecret Use an existing secret file with the Cassandra password (can be used with the bundled chart or with an existing installation)
  ##
  existingSecret: ""

## @section Metrics Parameters

## Prometheus metrics
##
metrics:
  ## @param metrics.enabled Enable the export of Prometheus metrics
  ##
  enabled: false
  ## Kong metrics service configuration
  ## @param metrics.service.annotations [object] Annotations for Prometheus metrics service
  ## @param metrics.service.type Type of the Prometheus metrics service
  ## @param metrics.service.port Port of the Prometheus metrics service
  ##
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "{{ .Values.metrics.service.port }}"
      prometheus.io/path: "/metrics"
    type: ClusterIP
    port: 9119
  ## Kong ServiceMonitor configuration
  ##
  serviceMonitor:
    ## @param metrics.serviceMonitor.enabled If `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`)
    ##
    enabled: false
    ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
    ## e.g:
    ## namespace: monitoring
    ##
    namespace: ""
    ## @param metrics.serviceMonitor.serviceAccount Service account used by Prometheus
    ## e.g:
    ## serviceAccount: prometheus
    ##
    serviceAccount: ""
    ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped.
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ## e.g:
    ## interval: 10s
    ##
    interval: ""
    ## @param metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ## e.g:
    ## scrapeTimeout: 10s
    ##
    scrapeTimeout: ""
    ## @param metrics.serviceMonitor.selector Prometheus instance selector labels
    ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
    ## e.g:
    ## selector:
    ##   prometheus: my-prometheus
    ##
    selector: {}
    ## @param metrics.serviceMonitor.rbac.enabled Whether to enable RBAC
    ## If RBAC is enabled on the cluster, additional resources will be required so Prometheus can reach kong's namespace
    ##
    rbac:
      enabled: true

What is the expected behavior?

No response

What do you see instead?

I'm seeing this error (I'm using a Bitnami PostgreSQL image pulled from a private registry): `StatefulSet my-release-postgresql failed error: pods "my-release-postgresql-0" is forbidden: failed quota: ns-resource-quota: must specify limits.cpu,limits.memory`

Additional information

No response

javsalgar commented 2 years ago

Hi,

It seems to me that this is not related to the Bitnami packaging but to the configuration of your cluster. The error shows that you have a ResourceQuota object in the namespace that forces every pod to define CPU and memory limits.
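
For context, a ResourceQuota along these lines would produce exactly that rejection: once a quota tracks `limits.cpu` and `limits.memory`, the API server refuses to create any pod in the namespace whose containers do not declare those limits. This is only a sketch, not the quota actually present in the reporter's cluster; the quota name `ns-resource-quota` comes from the error message, while the namespace name and values are placeholders.

```yaml
# Hypothetical ResourceQuota illustrating the constraint; values are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns-resource-quota
  namespace: my-namespace   # assumption: the namespace the chart is installed into
spec:
  hard:
    limits.cpu: "4"         # every pod must declare CPU limits that fit within this total
    limits.memory: 8Gi      # every pod must declare memory limits that fit within this total
```

The quota actually enforced can be inspected with `kubectl describe resourcequota ns-resource-quota -n <namespace>`.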

ngingihy commented 2 years ago

Hey @javsalgar, does that mean I need to specify it in the values.yaml file?

javsalgar commented 2 years ago

Hi,

Exactly. You may need to tweak the limit settings inside the PostgreSQL subchart. Check the PostgreSQL chart settings:

https://github.com/bitnami/charts/blob/master/bitnami/postgresql/values.yaml

For these values, you need to add the `postgresql.` prefix, as PostgreSQL is deployed as a subchart.
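
As a rough sketch (the values are placeholders, and the exact key depends on the PostgreSQL chart version bundled with bitnami/kong: older releases read `postgresql.resources`, newer ones `postgresql.primary.resources`), the limits would be declared in the same values.yaml used for the Kong release:

```yaml
# Sketch only: adjust the key to match the bundled PostgreSQL chart's values.yaml.
postgresql:
  enabled: true
  primary:
    resources:
      limits:
        cpu: 500m        # placeholder values; size them to your workload and quota
        memory: 1Gi
      requests:
        cpu: 250m
        memory: 256Mi
```

Note that the Kong deployment, the migration job, and the ingress controller create their own pods, so `kong.resources`, `migration.resources`, and `ingressController.resources` need limits as well (the values pasted above already set them).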

github-actions[bot] commented 2 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.