vmware-tanzu / kubeapps

A web-based UI for deploying and managing applications in Kubernetes clusters

multiple dashboard cards #542

Closed obeyler closed 6 years ago

obeyler commented 6 years ago

I don't understand why I've got multiple cards for some deployments: (screenshot)


helm list
NAME                    REVISION    UPDATED                     STATUS      CHART                           NAMESPACE             
catalog                 48          Fri Aug 24 10:10:08 2018    DEPLOYED    catalog-0.1.29                  catalog               
chartmuseum             48          Fri Aug 24 10:10:34 2018    DEPLOYED    chartmuseum-1.6.0               default               
andresmgot commented 6 years ago

Hi @obeyler, what you see should be the equivalent of executing helm list --all, since it also shows failed deployments. We have an open issue to mimic the default list behavior (https://github.com/kubeapps/kubeapps/issues/481).

You can purge those releases to remove them from the dashboard, but your current state is a bit odd, because it should not be possible to deploy the same chart several times with the same name. Be careful if you try to purge them, because doing so may delete the working one. In any case, you can list the releases using kubectl (if you deployed Tiller with the default options):

kubectl get configmaps -n kube-system -l OWNER=TILLER

You can then safely delete the duplicate ones from there.
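
For reference, a minimal Go sketch of the same lookup through client-go (assuming a client-go version contemporary with this thread, where List takes only ListOptions, and Tiller's default ConfigMap storage backend; this is illustrative, not Kubeapps code):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Tiller's default storage keeps one ConfigMap per release revision in
	// kube-system, labelled OWNER=TILLER; the NAME and STATUS labels identify
	// the release and the revision's state (DEPLOYED, SUPERSEDED, FAILED, ...).
	cms, err := client.CoreV1().ConfigMaps("kube-system").List(metav1.ListOptions{
		LabelSelector: "OWNER=TILLER",
	})
	if err != nil {
		panic(err)
	}
	for _, cm := range cms.Items {
		fmt.Println(cm.Name, cm.Labels["NAME"], cm.Labels["STATUS"])
	}
}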

obeyler commented 6 years ago

Hi @andresmgot,

 helm list --all
NAME                    REVISION    UPDATED                     STATUS      CHART                           NAMESPACE             
catalog                 49          Fri Aug 24 11:40:37 2018    DEPLOYED    catalog-0.1.29                  catalog               
chartmuseum             49          Fri Aug 24 11:41:06 2018    DEPLOYED    chartmuseum-1.6.0               default               
gogs                    49          Fri Aug 24 11:41:22 2018    DEPLOYED    gogs-0.6.0                      gogs                  
grafana                 27          Fri Aug 24 11:41:10 2018    DEPLOYED    grafana-1.14.0                  grafana               
kube-ops-view           49          Fri Aug 24 11:41:18 2018    DEPLOYED    kube-ops-view-0.4.2             default               
kubeapps                17          Fri Aug 24 11:40:58 2018    DEPLOYED    kubeapps-0.3.0                  kubeapps              
kubedb                  3           Fri Aug 24 11:40:44 2018    DEPLOYED    kubedb-0.8.0                    kubedb                
mangy-dragonfly         1           Thu Aug 23 11:25:09 2018    DELETED     ark-1.2.0                       ark                   
moldy-kitten            1           Fri Aug 24 12:03:43 2018    DEPLOYED    aerospike-0.1.7                 default               
monocular               49          Fri Aug 24 11:41:02 2018    DEPLOYED    monocular-0.6.3                 monocular             
nfs-server-provisioner  49          Fri Aug 24 11:40:34 2018    DEPLOYED    nfs-server-provisioner-0.1.5    nfs-server-provisioner
nginx-ingress           49          Fri Aug 24 11:40:29 2018    DEPLOYED    nginx-ingress-0.23.1            default               
projectriff             49          Fri Aug 24 11:40:55 2018    DEPLOYED    riff-0.0.7                      riff-system           
prometheus              46          Fri Aug 24 11:41:14 2018    DEPLOYED    prometheus-7.0.0                prometheus      

helm list --all doesn't show me the old deployments, and when I use kubectl get configmaps -n kube-system -l OWNER=TILLER, the number of catalog configmaps doesn't match the number of cards:

NAME                         DATA      AGE
catalog.v1                   1         10d
catalog.v10                  1         7d
catalog.v11                  1         7d
catalog.v12                  1         7d
catalog.v13                  1         7d
catalog.v14                  1         7d
catalog.v15                  1         7d
catalog.v16                  1         7d
catalog.v17                  1         7d
catalog.v18                  1         4d
catalog.v19                  1         4d
catalog.v2                   1         10d
catalog.v20                  1         4d
catalog.v21                  1         4d
catalog.v22                  1         3d
catalog.v23                  1         3d
catalog.v24                  1         3d
catalog.v25                  1         3d
catalog.v26                  1         3d
catalog.v27                  1         3d
catalog.v28                  1         3d
catalog.v29                  1         2d
catalog.v3                   1         10d
catalog.v30                  1         2d
catalog.v31                  1         2d
catalog.v32                  1         2d
catalog.v33                  1         2d
catalog.v34                  1         2d
catalog.v35                  1         1d
catalog.v36                  1         1d
catalog.v37                  1         1d
catalog.v38                  1         1d
catalog.v39                  1         1d
catalog.v4                   1         10d
catalog.v40                  1         1d
catalog.v41                  1         1d
catalog.v42                  1         1d
catalog.v43                  1         1d
catalog.v44                  1         1d
catalog.v45                  1         1d
catalog.v46                  1         1d
catalog.v47                  1         3h
catalog.v48                  1         3h
catalog.v49                  1         2h
catalog.v5                   1         9d
catalog.v6                   1         9d
catalog.v7                   1         9d
catalog.v8                   1         9d
catalog.v9                   1         8d

If the cards correspond to old deployments, why not show the revision number on each card? What is the point of showing all these old deployments?

andresmgot commented 6 years ago

It's true that the output of the helm CLI doesn't match what Kubeapps shows, even though we use the Helm API. We'll investigate the issue. Is there an easy way we can reproduce your error?

prydonius commented 6 years ago

I've been looking into this and it looks like failed revisions of a release will show up separately in the dashboard:

(screenshots)

In the above case, I have 5 revisions: 2 are in the FAILED state, 2 have been superseded and one is deployed (as you can see from the configmap labels). It looks like Helm doesn't mark failed revisions that have since been upgraded as superseded (which possibly makes sense). We should look into how the Helm CLI handles this case, as it seems to aggregate them.

obeyler commented 6 years ago

Feedback on my user experience: first, the revision number is missing from the cards; without it, seeing the same card several times looks like a bug.

Second, I clicked on one catalog card marked failed and asked to delete it, thinking that only the failed revision would be deleted. In fact, the whole catalog release was deleted, including the properly deployed revision.

prydonius commented 6 years ago

@obeyler thanks for the feedback. Yes, this is definitely a bug that we need to fix; we should only show one card across all revisions. The fact that deleting what looked like just the failed revision also deleted the deployed one makes this quite a serious issue.

prydonius commented 6 years ago

Okay, I've been comparing how helm list works with how our Tiller client lists releases. The reason helm list only shows one item per release is that it filters all the releases down to the latest revision (https://github.com/helm/helm/blob/master/cmd/helm/list.go#L173). We need to implement the same filter after the ListReleases call in the Tiller Proxy: https://github.com/kubeapps/kubeapps/blob/9fd96aead90a3b6d51463234944b9a158440ae14/pkg/proxy/proxy.go#L139.
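
For the record, a minimal sketch of that filter, assuming Helm v2's release proto types (the helper name latestRevisions is mine, not necessarily what the patch will use):

import "k8s.io/helm/pkg/proto/hapi/release"

// latestRevisions mirrors helm list's filtering: for each release name,
// keep only the entry with the highest revision number, so a single card
// is shown no matter how many revisions exist.
func latestRevisions(releases []*release.Release) []*release.Release {
	latest := map[string]*release.Release{}
	for _, r := range releases {
		if cur, ok := latest[r.Name]; !ok || r.Version > cur.Version {
			latest[r.Name] = r
		}
	}
	out := make([]*release.Release, 0, len(latest))
	for _, r := range latest {
		out = append(out, r)
	}
	return out
}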

The reason you're seeing 9 cards instead of 48 is that we ignore releases in the SUPERSEDED state, so we only get duplicate cards for releases in one of these states: https://github.com/kubeapps/kubeapps/blob/9fd96aead90a3b6d51463234944b9a158440ae14/pkg/proxy/proxy.go#L43. helm list only fetches the FAILED and DEPLOYED states by default, so we should update that list to match helm list as part of #481.
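
Translated to the Tiller proto enum, helm list's default filter would look like this (a sketch; the variable name is illustrative):

import "k8s.io/helm/pkg/proto/hapi/release"

// helm list's default behaviour: only DEPLOYED and FAILED releases are
// fetched; SUPERSEDED and DELETED ones only appear with --all.
var defaultStatusCodes = []release.Status_Code{
	release.Status_DEPLOYED,
	release.Status_FAILED,
}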

I also noticed that we access the Releases struct fields directly instead of using the getters the Helm API provides (e.g. we do list.Releases instead of list.GetReleases(): https://github.com/kubeapps/kubeapps/blob/9fd96aead90a3b6d51463234944b9a158440ae14/pkg/proxy/proxy.go#L96). Although less important, we probably want to use the getters rather than accessing the struct directly, in case the Helm API adds any further filtering in the future (it doesn't do any today).
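
Concretely, the suggested change is tiny (a sketch; extractReleases is an illustrative name, not existing Kubeapps code):

import (
	"k8s.io/helm/pkg/proto/hapi/release"
	"k8s.io/helm/pkg/proto/hapi/services"
)

func extractReleases(list *services.ListReleasesResponse) []*release.Release {
	// Prefer the generated getter over the list.Releases field: it is
	// nil-safe and would pick up any filtering Helm adds in the future.
	return list.GetReleases()
}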

andresmgot commented 6 years ago

Thanks for the detailed debug notes @prydonius! I'll send a patch for this now.

andresmgot commented 6 years ago

Hi @obeyler, we have merged the patch (and the issue got closed automatically). We will release a new version soon, but if you want the patch now, you can use the latest build by executing:

kubectl set image -n kubeapps deployment/kubeapps-tiller-proxy proxy=kubeapps/tiller-proxy@sha256:9ffabba39e74d1f8709c16149fe42ea3a80988347932d7d1dad938a16b55efac

(the command above assumes that you deployed the chart in the kubeapps namespace with kubeapps as the release name)

If you hit any errors, please reopen this issue.