vmware-tanzu / kubeapps

A web-based UI for deploying and managing applications in Kubernetes clusters

Can not Visit Config - App Repository Site #1520

Closed: MatthiasHertel closed this issue 4 years ago

MatthiasHertel commented 4 years ago

Description:

I deployed the Kubeapps Helm chart (Helm v2) and exposed the service via a Traefik ingress (ingress.class=traefik). I can reach the Kubeapps dashboard, TLS-terminated by Traefik, and everything is fine, but I cannot open the config/repos page. There is a weird React error which I don't understand (see screenshot). I had previously deployed Kubeapps successfully with the setup described below; yesterday I purged the Helm release and wanted to redeploy, but now it does not work.

[screenshot]

Steps to reproduce the issue:

I deployed Kubeapps via Helm 2 with the following command (values.yaml below):

λ helm upgrade --install kubeapps --namespace kubeapps -f values.yaml bitnami/kubeapps
Release "kubeapps" does not exist. Installing it now.
NAME:   kubeapps
LAST DEPLOYED: Sat Feb 15 10:21:39 2020
NAMESPACE: kubeapps
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                AGE
kubeapps-frontend-config            <invalid>
kubeapps-internal-dashboard-config  <invalid>

==> v1/Deployment
NAME                                        AGE
kubeapps                                    <invalid>
kubeapps-internal-apprepository-controller  <invalid>
kubeapps-internal-assetsvc                  <invalid>
kubeapps-internal-dashboard                 <invalid>
kubeapps-internal-tiller-proxy              <invalid>
kubeapps-mongodb                            <invalid>

==> v1/Pod(related)
NAME                                                         AGE
kubeapps-6b85c98844-44pd4                                    <invalid>
kubeapps-internal-apprepository-controller-7998749594-k94nd  <invalid>
kubeapps-internal-assetsvc-66cdbc76b4-6rczk                  <invalid>
kubeapps-internal-dashboard-dbbd957f8-wzb2p                  <invalid>
kubeapps-internal-tiller-proxy-84764cd657-rk4m8              <invalid>
kubeapps-mongodb-77b457649b-p72ts                            <invalid>

==> v1/Service
NAME                            AGE
kubeapps                        <invalid>
kubeapps-internal-assetsvc      <invalid>
kubeapps-internal-dashboard     <invalid>
kubeapps-internal-tiller-proxy  <invalid>
kubeapps-mongodb                <invalid>

==> v1/ServiceAccount
NAME                                        AGE
kubeapps-internal-apprepository-controller  <invalid>
kubeapps-internal-tiller-proxy              <invalid>

==> v1beta1/Ingress
NAME      AGE
kubeapps  <invalid>

==> v1beta1/Role
NAME                                        AGE
kubeapps-internal-apprepository-controller  <invalid>
kubeapps-internal-tiller-proxy              <invalid>
kubeapps-repositories-read                  <invalid>
kubeapps-repositories-write                 <invalid>

==> v1beta1/RoleBinding
NAME                                        AGE
kubeapps-internal-apprepository-controller  <invalid>
kubeapps-internal-tiller-proxy              <invalid>

NOTES:
** Please be patient while the chart is being deployed **

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace kubeapps

Kubeapps can be accessed via port 80 on the following DNS name from within your cluster:

   kubeapps.kubeapps.svc.cluster.local

To access Kubeapps from outside your K8s cluster, follow the steps below:

1. Get the Kubeapps URL and associate Kubeapps hostname to your cluster external IP:

   export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on others K8s clusters
   echo "Kubeapps URL: http://kubeapps-db.svc.domainname.com/"
   echo "$CLUSTER_IP  kubeapps-db.svc.domainname.com" | sudo tee -a /etc/hosts

2. Open a browser and access Kubeapps using the obtained URL.

Describe the results you received:

I can visit all pages, but when I open the config App Repository page I get a "something went wrong" message.

[screenshot]

Describe the results you expected:

I expect to be able to visit the config/repos page (https://kubeapps-db.svc.domainname.com/#/config/repos) without any issue.

Additional information you deem important (e.g. issue happens only occasionally):

values.yaml

 λ cat values.yaml 
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Enable this feature flag to use Kubeapps with Helm 3 support.
## If you set it to true, Kubeapps will not work with releases installed with Helm 2.
useHelm3: false

## The frontend service is the main reverse proxy used to access the Kubeapps UI
## To expose Kubeapps externally either configure the ingress object below or
## set frontend.service.type=LoadBalancer in the frontend configuration.
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true

  ## Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false

  ## When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: kubeapps-db.svc.domainname.com

  ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret or rely on cert-manager to create it
  ##
  tls: false

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations,
  ## please see https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations:
    kubernetes.io/ingress.class: traefik
    ## Keep the connection open with the API server even if idle (the default is 60 seconds)
    ## Setting it to 10 minutes which should be enough for our current use case of deploying/upgrading/deleting apps
    ##
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "600"

  ## The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: kubeapps.local
  ##   path: /

  ## The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ## - hosts:
  ##     - kubeapps.local
  ##   secretName: kubeapps.local-tls

  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ##
  secrets: []
  ## - name: kubeapps.local-tls
  ##   key:
  ##   certificate:

  ## DEPRECATED: to be removed on 3.0.0
  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  ##
  # hosts:
  #   - name: kubeapps.local
  #     path: /
  #     ## Set this to true in order to enable TLS on the ingress record
  #     ##
  #     tls: false
  #     ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  #     ##
  #     tlsSecret: kubeapps.local-tls

## Frontend parameters
##
frontend:
  replicaCount: 1
  ## Bitnami Nginx image
  ## ref: https://hub.docker.com/r/bitnami/nginx/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/nginx
    tag: 1.16.1-debian-9-r52
  ## Frontend service parameters
  ##
  service:
    ## Service type
    ##
    type: ClusterIP
    ## HTTP Port
    ##
    port: 80
    ## Set a static load balancer IP (only when frontend.service.type="LoadBalancer")
    ## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
    ##
    # loadBalancerIP:
    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    annotations: {}

  ## NGINX containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 0
    timeoutSeconds: 5
  ## NGINX containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 128Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}
  ## Use access_token as the Bearer when talking to the k8s api server
  ## Some K8s distributions such as GKE require it
  ##
  proxypassAccessTokenAsBearer: false

## AppRepository Controller is the controller used to manage the repositories to
## sync. Set apprepository.initialRepos to configure the initial set of
## repositories to use when first installing Kubeapps.
##
apprepository:
  ## Running a single controller replica to avoid sync job duplication
  ##
  replicaCount: 1
  ## Schedule for syncing apprepositories. Every ten minutes by default
  # crontab: "*/10 * * * *"
  ## Bitnami Kubeapps AppRepository Controller image
  ## ref: https://hub.docker.com/r/bitnami/kubeapps-apprepository-controller/tags/
  ##
  image:
    registry: docker.io
    repository: kubeapps/apprepository-controller
    tag: latest
  ## Kubeapps assets synchronization tool
  ## Image used to perform chart repository syncs
  ## ref: https://hub.docker.com/r/bitnami/kubeapps-asset-syncer/tags/
  ##
  syncImage:
    registry: docker.io
    repository: kubeapps/asset-syncer
    tag: latest
  ## Initial charts repo proxies to configure
  ##
  initialReposProxy:
    enabled: false
    # http_proxy: "http://yourproxy:3128"
    # https_proxy: "http://yourproxy:3128"
    # no_proxy: "0.0.0.0/0"
  ## Initial chart repositories to configure
  ##
  initialRepos:
    - name: stable
      url: https://kubernetes-charts.storage.googleapis.com
    # - name: incubator
    #   url: https://kubernetes-charts-incubator.storage.googleapis.com
    # - name: svc-cat
    #   url: https://svc-catalog-charts.storage.googleapis.com
    # - name: bitnami
    #   url: https://charts.bitnami.com/bitnami
  # Additional repositories
  # - name: chartmuseum
  #   url: https://chartmuseum.default:8080
  #   nodeSelector:
  #     somelabel: somevalue
  #   # Specify an Authorization Header if you are using an authentication method.
  #   authorizationHeader: "Bearer xrxNC..."
  #   # If you're providing your own certificates, please use this to add the certificates as secrets.
  #   # It should start with -----BEGIN CERTIFICATE----- or
  #   # -----BEGIN RSA PRIVATE KEY-----
  #   caCert:
  # https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
  ## AppRepository Controller containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 128Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for pod assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

## Hooks are used to perform actions like populating apprepositories
## or creating required resources during installation or upgrade
##
hooks:
  ## Bitnami Kubectl image
  ## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/kubectl
    tag: 1.16.3-r17
  ## Affinity for hooks' pods assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for hooks' pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for hooks' pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

# Kubeops is an interface between the Kubeapps Dashboard and Helm 3/Kubernetes.
# Set useHelm3 to true to use Kubeops instead of Tiller Proxy.
kubeops:
  replicaCount: 1
  image:
    registry: docker.io
    repository: kubeapps/kubeops
    tag: latest
  service:
    port: 8080
  resources:
    limits:
      cpu: 250m
      memory: 256Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Kubeops containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    httpGet:
      path: /live
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 0
    timeoutSeconds: 5
  nodeSelector: {}
  tolerations: []
  affinity: {}

## Tiller Proxy is a secure REST API on top of Helm's Tiller component used to
## manage Helm chart releases in the cluster from Kubeapps. Set tillerProxy.host
## to configure a different Tiller host to use.
##
tillerProxy:
  replicaCount: 1

  ## Bitnami Kubeapps Tiller Proxy image
  ## ref: https://hub.docker.com/r/bitnami/kubeapps-tiller-proxy/tags/
  ##
  image:
    registry: docker.io
    repository: kubeapps/tiller-proxy
    tag: latest

  ## Tiller Proxy service parameters
  ##
  service:
    ## HTTP Port
    ##
    port: 8080
  host: tiller-deploy.kube-system:44134

  ## TLS parameters
  ##
  tls: {}
  #  ca:
  #  cert:
  #  key:
  #  verify: false

  ## It's possible to modify the default timeout for install/upgrade/rollback/delete apps
  ## (Default: 300s)
  # timeout: 300

  ## Tiller Proxy containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 256Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Tiller Proxy containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    httpGet:
      path: /live
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 0
    timeoutSeconds: 5

  ## Affinity for Tiller Proxy pods assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for Tiller Proxy pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for Tiller Proxy pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

## Assetsvc is used to serve assets metadata over a REST API.
##
assetsvc:
  replicaCount: 1
  ## Bitnami Kubeapps Assetsvc image
  ## ref: https://hub.docker.com/r/bitnami/kubeapps-assetsvc/tags/
  ##
  image:
    registry: docker.io
    repository: kubeapps/assetsvc
    tag: latest
  ## Assetsvc service parameters
  ##
  service:
    ## HTTP Port
    ##
    port: 8080
  ## Assetsvc containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 128Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Assetsvc containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    httpGet:
      path: /live
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 0
    timeoutSeconds: 5
  ## Affinity for Assetsvc pods assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for Assetsvc pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for Assetsvc pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

## Dashboard serves the compiled static React frontend application. This is an
## internal service used by the main frontend reverse-proxy and should not be
## accessed directly.
##
dashboard:
  replicaCount: 1
  ## Bitnami Kubeapps Dashboard image
  ## ref: https://hub.docker.com/r/bitnami/kubeapps-dashboard/tags/
  ##
  image:
    registry: docker.io
    repository: kubeapps/dashboard
    tag: latest
  ## Dashboard service parameters
  ##
  service:
    ## HTTP Port
    ##
    port: 8080
  ## Dashboard containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 0
    timeoutSeconds: 5
  ## Dashboard containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 128Mi
    requests:
      cpu: 25m
      memory: 32Mi
  ## Affinity for Dashboard pods assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
  ## Node labels for Dashboard pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## Tolerations for Dashboard pods assignment. Evaluated as a template.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: {}

## MongoDB chart configuration
## ref: https://github.com/helm/charts/blob/master/stable/mongodb/values.yaml
##
mongodb:
  ## Whether to deploy a mongodb server to satisfy the applications database requirements.
  enabled: true
  ## Kubeapps uses MongoDB as a cache and persistence is not required
  ##
  persistence:
    enabled: false
    size: 8Gi
  ## MongoDB credentials are handled by kubeapps to facilitate upgrades
  ##
  existingSecret: kubeapps-mongodb
  ## Pod Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: false
  ## MongoDB containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 50m
      memory: 256Mi

## PostgreSQL chart configuration
## ref: https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
##
postgresql:
  ## Whether to deploy a postgresql server to satisfy the applications database requirements.
  enabled: false
  ## Enable replication for high availability
  replication:
    enabled: true
  ## Create a database for Kubeapps on the first run
  postgresqlDatabase: assets
  ## Kubeapps uses PostgreSQL as a cache and persistence is not required
  ##
  persistence:
    enabled: false
    size: 8Gi
  ## PostgreSQL credentials are handled by kubeapps to facilitate upgrades
  ##
  existingSecret: kubeapps-db
  ## Pod Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: false
  ## PostgreSQL containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 256Mi
      cpu: 250m

## RBAC parameters
##
rbac:
  ## Perform creation of RBAC resources
  ##
  create: true

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: false
  runAsUser: 1001
  # fsGroup: 1001

## Image used for the tests. The only requirement is to include curl
##
testImage:
  registry: docker.io
  repository: bitnami/nginx
  tag: 1.16.1-debian-9-r52

# Auth Proxy for OIDC support
# ref: https://github.com/kubeapps/kubeapps/blob/master/docs/user/using-an-OIDC-provider.md
authProxy:
  # Set to true to enable the OIDC proxy
  enabled: false
  ## Bitnami OAuth2 Proxy image
  ## ref: https://hub.docker.com/r/bitnami/oauth2-proxy/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/oauth2-proxy
    tag: 4.0.0-r92
  ## Mandatory parameters
  ##
  provider: ""
  clientID: ""
  clientSecret: ""
  ## cookieSecret is used by oauth2-proxy to encrypt any credentials so that it requires
  ## no storage. Note that it must be a particular number of bytes. Recommend using the
  ## following to generate a cookieSecret as per the oauth2 configuration documentation
  ## at https://pusher.github.io/oauth2_proxy/configuration :
  ## python -c 'import os,base64; print base64.urlsafe_b64encode(os.urandom(16))'
  cookieSecret: ""
  ## Use "example.com" to restrict logins to emails from example.com
  emailDomain: "*"
  ## Additional flags for oauth2-proxy
  ##
  additionalFlags: []
  # - -ssl-insecure-skip-verify
  # - -cookie-secure=false
  # - -scope=https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/cloud-platform
  # - -oidc-issuer-url=https://accounts.google.com # Only needed if provider is oidc
  ## OAuth2 Proxy containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## Default values set based on usage data from running Kubeapps instances
    ## ref: https://github.com/kubeapps/kubeapps/issues/478#issuecomment-422979262
    ##
    limits:
      cpu: 250m
      memory: 128Mi
    requests:
      cpu: 25m
      memory: 32Mi
## Feature flags
## These are used to switch on in-development features or new features not yet released.
featureFlags:
  reposPerNamespace: false

Version of Helm, Kubeapps and Kubernetes:

[screenshot]

absoludity commented 4 years ago

Hi there @MatthiasHertel. As far as I can tell, the issue is that you're using the latest containers for all services, which are our development containers and are not compatible with the released chart. If I use the same chart you used, kubeapps-3.3.1, with the default values, it works as intended, because the released container images are used (rather than latest):

$ helm -n kubeapps ls
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART          APP VERSION
kubeapps        kubeapps        1               2020-02-17 01:07:36.353200498 +0000 UTC deployed        kubeapps-3.3.1 v1.8.1

$ kubectl -n kubeapps get deployments -o=jsonpath='{range .items[*].spec.template.spec.containers[*]}{@.image}{"\n"}{end}'
docker.io/bitnami/nginx:1.16.1-debian-9-r52
docker.io/bitnami/oauth2-proxy:4.0.0-r92
docker.io/bitnami/kubeapps-apprepository-controller:1.8.1-scratch-r0
docker.io/bitnami/kubeapps-assetsvc:1.8.1-scratch-r0
docker.io/bitnami/kubeapps-dashboard:1.8.1-debian-10-r0

(Note: I'm using Helm 3, but that's not relevant here.) You'll notice that the images used by the chart by default are all the Bitnami ones released with the chart (e.g. bitnami/kubeapps-dashboard:1.8.1-debian-10-r0), not our development ones like kubeapps/kubeapps-internal-dashboard:latest, which I'm guessing you'll see based on your values.yaml.

If you redeploy after removing all the specific values that you've set above for the kubeapps images, it should just work. Let me know if that's not the case.
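
For example, a trimmed-down values.yaml along these lines (just a sketch based on your values above, keeping only the ingress settings and letting the chart use its released image tags) should be enough:

useHelm3: false

ingress:
  enabled: true
  hostname: kubeapps-db.svc.domainname.com
  annotations:
    kubernetes.io/ingress.class: traefik

and then the same helm upgrade --install kubeapps --namespace kubeapps -f values.yaml bitnami/kubeapps command as before.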

Also, I'm interested to know how you ended up using a values.yaml with the development version of the images?

HTH!

andresmgot commented 4 years ago

As @absoludity mentioned, the images under the kubeapps/ registry are not meant to be used in production. Apart from that, another thing that can go wrong is the Ingress definition. Are you using the Ingress object generated by Kubeapps or a different one? Can you post the YAML of the ingress here?
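
For reference, something like the following should print it (assuming the default Ingress name created by the chart, as shown in your helm output above):

kubectl -n kubeapps get ingress kubeapps -o yaml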

MatthiasHertel commented 4 years ago

@andresmgot @absoludity

It was exactly what you described ... the issue is solved ... it was my fault, I used the values.yaml from master ...

For others, this command prevents it:

helm inspect values bitnami/kubeapps > values.yaml
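
For completeness, the full flow with the released chart would then be something like:

helm inspect values bitnami/kubeapps > values.yaml
# edit values.yaml as needed (ingress hostname, annotations, ...)
helm upgrade --install kubeapps --namespace kubeapps -f values.yaml bitnami/kubeapps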

kind regards matthias

absoludity commented 4 years ago

Thanks for getting back. Ah, so you grabbed the values.yaml from GitHub and modified it from there.

It might be worth thinking about whether this can be improved, as grabbing the values from GitHub is a natural thing to do. For example, the image values in the values.yaml in master could always point at the latest release, while our dev setup updates them to use latest or custom-built images. Thoughts, @andresmgot?

andresmgot commented 4 years ago

We could have two different files, values.yaml and values.dev.yaml (or something like that), and update values.yaml manually when releasing a new version, but that would be prone to human error when syncing or updating the values...
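
For illustration, the idea would be something like the following (the file name values.dev.yaml is hypothetical):

# released image tags, safe for users to copy
helm install ./chart/kubeapps -f ./chart/kubeapps/values.yaml
# development: later -f files take precedence, so the dev overrides win
helm install ./chart/kubeapps -f ./chart/kubeapps/values.yaml -f ./chart/kubeapps/values.dev.yaml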

MatthiasHertel commented 4 years ago

Hi there,

I guess the main question is:

should master always be runnable or not?

If you follow git-flow, it should: https://nvie.com/posts/a-successful-git-branching-model/#the-main-branches

To prevent users from fetching an unstable values.yaml from git (master branch), a development values.yaml should never appear in the master branch; you should develop in a develop/feature branch.

kind regards matthias

andresmgot commented 4 years ago

A small clarification: master, as you say, should always work, but this case was a bit different.

If you use the chart in this repo with the values here (e.g. helm install ./chart/kubeapps -f ./chart/kubeapps/values.yaml), that will work. You would be using the "nightly" build, though.

What is not guaranteed to work is using the released chart (available here: https://github.com/bitnami/charts/tree/master/bitnami/kubeapps) with the values from this repo, since you would actually be mixing two different charts with different templates.