vmware-tanzu / kubeapps

A web-based UI for deploying and managing applications in Kubernetes clusters

Request to install or upgrade helm chart times out #2363

Closed · absoludity closed this issue 3 years ago

absoludity commented 3 years ago

Description:

Most requests within Kubeapps return in a reasonable time, but the ones that stand out (and can cause timeouts in slower environments) are the PUT requests to /api/tiller-deploy/v1/clusters/tmc-cluster-1/namespaces/default/releases/<release-name> when installing (or the similar requests when upgrading).

This is because the chart is downloaded and checks are done to ensure the user can create all the resources (which can take time) before the actual install takes place.

Steps to reproduce the issue:

  1. Choose a chart from the catalog
  2. Open the network tab of the browser's developer tools
  3. Click deploy and watch the timing of the request

Describe the results you received:

[Screenshot: deploy-timing]

Describe the results you expected:

Under 2s, preferably under 1s.

Additional information you deem important (e.g. issue happens only occasionally):

It may be worth investigating whether we can avoid doing the checks serially as part of this request: are they done to provide useful feedback before the attempt is made? Could we trigger this (and wait for the result) before the user clicks deploy (i.e. as soon as the page to deploy something is displayed)?

Or could we otherwise do the checks asynchronously, so that the request is serviced directly and the UI updates to indicate that Kubeapps is checking whether the user can deploy the chart, etc., before updating to do the actual install?

andresmgot commented 3 years ago

> This is because the chart is downloaded and checks are done to ensure the user can create all the resources (which can take time) before the actual install takes place.

The checks are not actually done (I said that, but forgot that was just for tiller-proxy). What we do is:

  1. Get the apprepo
  2. Download the chart
  3. Get the registry secrets (if needed)
  4. Install the chart

I assume step 2 is what's giving you the slow response, but I don't think we can do much about it.
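
To make the cost of doing these steps serially concrete, here is a minimal, runnable Go sketch; the step names and sleep durations are purely illustrative stand-ins, not the actual kubeops code or measured timings:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for the four steps, not the actual kubeops code;
// each sleep simulates the network round-trip the real handler performs.
func getAppRepo()         { time.Sleep(200 * time.Millisecond) } // 1. read the AppRepository
func downloadChart()      { time.Sleep(8 * time.Second) }        // 2. fetch the chart (and repo index)
func getRegistrySecrets() { time.Sleep(200 * time.Millisecond) } // 3. read pull secrets, if needed
func installRelease()     { time.Sleep(2 * time.Second) }        // 4. perform the Helm install

func main() {
	start := time.Now()
	// The steps run serially inside the single PUT/POST handler, so no
	// response is written until all of them have finished.
	getAppRepo()
	downloadChart()
	getRegistrySecrets()
	installRelease()
	fmt.Printf("request served after %s\n", time.Since(start).Round(time.Millisecond))
}
```

Whatever the real numbers are, the client only sees a response once the sum of all the steps has elapsed, which is why the chart download shows up directly in the request time.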

BTW, we are configuring NGINX to avoid timing out: https://github.com/kubeapps/kubeapps/blob/master/chart/kubeapps/templates/kubeapps-frontend-config.yaml#L16 I guess there is some extra jump (pinniped-proxy?) that requires the same config.

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

doddatpivotal commented 3 years ago

I am facing a similar issue with regard to timeouts. In my case I've integrated OIDC and Pinniped with Kubeapps Helm chart version 5.3.4. I thought everything was grand because I could log in, change namespaces, and view the catalog. But when I attempt to deploy an app, I get an error in the UI stating "An error occurred: upstream request timeout"; however, if I then navigate to the applications in the namespace, the deploy was successful. The same thing happens if I delete the deployment. When I look at the Kubeapps container logs I see the following associated with the deploy and the delete. All GET operations are successful.

kubeapps/kubeapps-87d48ff55-njhnj[auth-proxy]: [2021/07/17 01:52:58] [proxy.go:78] Error proxying to upstream server: context canceled
kubeapps/kubeapps-87d48ff55-njhnj[auth-proxy]: 100.96.1.22:38034 - alana@winterfell.live [2021/07/17 01:52:43] kubeapps.whiteharbor.tkg-vsphere-lab.winterfell.life POST / "/api/kubeops/v1/clusters/default/namespaces/petclinic/releases" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 502 336 14.998
kubeapps/kubeapps-87d48ff55-njhnj[auth-proxy]: [2021/07/17 01:51:45] [proxy.go:78] Error proxying to upstream server: context canceled
kubeapps/kubeapps-87d48ff55-njhnj[auth-proxy]: 100.96.1.22:38352 - alana@winterfell.live [2021/07/17 01:51:30] kubeapps.whiteharbor.tkg-vsphere-lab.winterfell.life DELETE / "/api/kubeops/v1/clusters/default/namespaces/petclinic/releases/petclinic-db?purge=true" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 502 336 15.003

Attached are screenshots of the browser network inspector for the delete and the deploy.

[Screenshots: Screen Shot 2021-07-16 at 10 15 49 PM and Screen Shot 2021-07-16 at 10 16 59 PM]

absoludity commented 3 years ago

I'm pretty sure that when I reported this issue, I was noticing it occasionally while using an OIDC setup with pinniped. As Andres mentioned above, it is most likely that, without pinniped-proxy in the path, we're already configured (via NGINX) to allow for this long request, but pinniped-proxy itself is probably using whatever default timeout the related Rust library uses (it looks like 15s from both mine and Dodd's screenshots). For now, we should simply ensure that we're not timing out in pinniped-proxy.
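
pinniped-proxy itself is written in Rust, but the resulting failure mode is easy to illustrate with a small, runnable Go sketch (not pinniped-proxy's actual code): a proxy hop whose upstream client gives up after 15 seconds returns a 502 after ~15s even though the upstream work eventually completes, which matches the behaviour reported above (502 in the UI, release present afterwards).

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Upstream handler standing in for a slow Helm install: it only answers
	// after 20s, but the work it represents still completes server-side.
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(20 * time.Second)
		w.Write([]byte("release installed\n"))
	})
	go func() { log.Fatal(http.ListenAndServe("127.0.0.1:9001", upstream)) }()

	// Intermediate hop whose HTTP client uses a 15s timeout, standing in for
	// a proxy that keeps its library's short default.
	target, err := url.Parse("http://127.0.0.1:9001")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.Transport = &http.Transport{ResponseHeaderTimeout: 15 * time.Second}

	// Requests to :9000 come back as 502 Bad Gateway after ~15s, even though
	// the upstream "install" succeeds a few seconds later.
	log.Fatal(http.ListenAndServe("127.0.0.1:9000", proxy))
}
```

The ports and timings here are made up; the point is only that any extra hop with its own, shorter client timeout will cut off long install/upgrade requests regardless of the generous NGINX configuration in front of it.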

viktor-haag commented 3 years ago

Hi all, @antgamdia forwarded me from the Slack channel to this issue as it might be connected to my problem. For context, I have an OIDC setup with pinniped and keycloak which seems to be working correctly on the surface, but for some reason it is incredibly slow when performing calls to /api/clusters/default/*. In fact, some calls run into the default nginx timeout of 60s, which breaks the UI, as you can see in the image below.

[Screenshot: kubeapps_timeout]

I tried it out with a service account / API token setup and there I did not have any problems with timeouts and the UI was quite responsive, so my best guess is that something strange is happening in pinniped or pinniped-proxy.

I also noticed that those timeouts don't appear consistently. For instance, if I directly enter https://my.host.com/api/clusters/default/api/v1/namespaces/mynamespace into the address bar, it would sometimes work and sometimes run into a timeout, but unfortunately the logs don't seem to contain any meaningful information about why this is happening.

absoludity commented 3 years ago

Thanks Viktor. From your requests, the slowness doesn't appear to be limited to install/upgrade requests (a known long request) but seems more general. Let's figure out why that is. Can you list the version of pinniped that you've installed, and which Kubeapps chart version, just so we can see if we can reproduce the specific issue you're seeing (which is worse)? Also, other info that would be useful is the logs of pinniped-proxy, pinniped-concierge and keycloak to dig through (if there's no security issue in providing those).

viktor-haag commented 3 years ago

Hi Michael, here is the info you requested.

Kubeapps: 7.1.6
Pinniped: v0.9.2

For the logs, I performed a new login and extracted the relevant log sections (the time of access was 15:57). It basically leads to the same situation as described in my previous post. For security reasons I redacted our public-facing endpoints :).

pinniped-proxy logs:

2021-07-22T15:57:30+02:00 INFO pinniped_proxy::service > GET https://kubernetes.default/ 403 Forbidden
2021-07-22T15:57:32+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:33+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:33+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:34+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:34+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:35+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:35+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:35+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:36+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:36+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:37+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:37+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:38+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:39+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:40+02:00 INFO pinniped_proxy::service > POST https://kubernetes.default/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created
2021-07-22T15:57:41+02:00 INFO pinniped_proxy::service > GET https://kubernetes.default/version?timeout=32s 200 OK
2021-07-22T15:57:42+02:00 INFO pinniped_proxy::service > GET https://kubernetes.default/api/v1/namespaces/kubeapps-test/secrets?labelSelector=owner%3Dhelm 200 OK

pinniped-concierge logs:

2021-07-22T13:57:24.411913531Z I0722 13:57:24.411370 1 certs_observer.go:69] certsObserverController Sync updated certs in the dynamic cert provider
2021-07-22T13:57:24.413314925Z I0722 13:57:24.412854 1 certs_expirer.go:88] certsExpirerController Sync found a renew delta of -214h26m15.587158104s
2021-07-22T13:57:24.413358699Z I0722 13:57:24.413172 1 certs_expirer.go:88] certsExpirerController Sync found a renew delta of -214h26m15.586836276s
2021-07-22T13:57:24.692381354Z I0722 13:57:24.691914 1 jwtcachefiller.go:115] jwtcachefiller-controller "msg"="actual jwt authenticator and desired jwt authenticator are the same" "issuer"="https://www.mykeycloak.com/auth/realms/myrealm" "jwtAuthenticator"={"name":"jwt-authenticator"}
2021-07-22T13:57:30.273703685Z I0722 13:57:30.273201 1 trace.go:205] Trace[112231146]: "create" kind: (22-Jul-2021 13:57:29.897) (total time: 375ms):
2021-07-22T13:57:30.273769945Z Trace[112231146]: ---"success" userID:,hasExtra:false,authenticated:true 375ms (13:57:00.273)
2021-07-22T13:57:30.273783076Z Trace[112231146]: [375.905738ms] [375.905738ms] END
2021-07-22T13:57:30.873578511Z I0722 13:57:30.873149 1 trace.go:205] Trace[1555952762]: "create" kind: (22-Jul-2021 13:57:30.598) (total time: 274ms):
2021-07-22T13:57:30.873633992Z Trace[1555952762]: ---"success" userID:,hasExtra:false,authenticated:true 274ms (13:57:00.873)
2021-07-22T13:57:30.873647017Z Trace[1555952762]: [274.451495ms] [274.451495ms] END
2021-07-22T13:57:32.569767306Z I0722 13:57:32.569354 1 trace.go:205] Trace[99147645]: "create" kind: (22-Jul-2021 13:57:31.478) (total time: 1091ms):
2021-07-22T13:57:32.569828426Z Trace[99147645]: ---"success" userID:,hasExtra:false,authenticated:true 1091ms (13:57:00.569)
2021-07-22T13:57:32.569841591Z Trace[99147645]: [1.091030784s] [1.091030784s] END
2021-07-22T13:57:32.570193935Z I0722 13:57:32.569990 1 trace.go:205] Trace[1387445637]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.6.245,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:31.477) (total time: 1092ms):
2021-07-22T13:57:32.570247717Z Trace[1387445637]: ---"Object stored in database" 1091ms (13:57:00.569)
2021-07-22T13:57:32.570259867Z Trace[1387445637]: [1.092072299s] [1.092072299s] END
2021-07-22T13:57:32.770943497Z I0722 13:57:32.770460 1 trace.go:205] Trace[1002785102]: "create" kind: (22-Jul-2021 13:57:31.480) (total time: 1289ms):
2021-07-22T13:57:32.770994199Z Trace[1002785102]: ---"success" userID:,hasExtra:false,authenticated:true 1289ms (13:57:00.770)
2021-07-22T13:57:32.771005838Z Trace[1002785102]: [1.289891396s] [1.289891396s] END
2021-07-22T13:57:32.967912265Z I0722 13:57:32.967346 1 trace.go:205] Trace[64617416]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.6.245,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:31.480) (total time: 1487ms):
2021-07-22T13:57:32.967955110Z Trace[64617416]: ---"Object stored in database" 1290ms (13:57:00.770)
2021-07-22T13:57:32.967966807Z Trace[64617416]: [1.487046441s] [1.487046441s] END
2021-07-22T13:57:32.970022891Z I0722 13:57:32.969568 1 trace.go:205] Trace[164058779]: "create" kind: (22-Jul-2021 13:57:31.568) (total time: 1400ms):
2021-07-22T13:57:32.970059321Z Trace[164058779]: ---"success" userID:,hasExtra:false,authenticated:true 1400ms (13:57:00.969)
2021-07-22T13:57:32.970071573Z Trace[164058779]: [1.40078897s] [1.40078897s] END
2021-07-22T13:57:32.970184102Z I0722 13:57:32.970020 1 trace.go:205] Trace[770480270]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.8.30,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:31.568) (total time: 1401ms):
2021-07-22T13:57:32.970198778Z Trace[770480270]: ---"Object stored in database" 1401ms (13:57:00.969)
2021-07-22T13:57:32.970208577Z Trace[770480270]: [1.401741626s] [1.401741626s] END
2021-07-22T13:57:33.069053472Z I0722 13:57:33.068627 1 trace.go:205] Trace[1724492117]: "create" kind: (22-Jul-2021 13:57:31.568) (total time: 1499ms):
2021-07-22T13:57:33.069095642Z Trace[1724492117]: ---"success" userID:,hasExtra:false,authenticated:true 1499ms (13:57:00.068)
2021-07-22T13:57:33.069107134Z Trace[1724492117]: [1.499642917s] [1.499642917s] END
2021-07-22T13:57:33.069315608Z I0722 13:57:33.069101 1 trace.go:205] Trace[1543272125]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.8.30,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:31.568) (total time: 1500ms):
2021-07-22T13:57:33.069337633Z Trace[1543272125]: ---"Object stored in database" 1499ms (13:57:00.068)
2021-07-22T13:57:33.069347953Z Trace[1543272125]: [1.500779742s] [1.500779742s] END
2021-07-22T13:57:33.572848072Z I0722 13:57:33.572405 1 trace.go:205] Trace[1943226351]: "create" kind: (22-Jul-2021 13:57:33.208) (total time: 363ms):
2021-07-22T13:57:33.572904850Z Trace[1943226351]: ---"success" userID:,hasExtra:false,authenticated:true 363ms (13:57:00.572)
2021-07-22T13:57:33.572942101Z Trace[1943226351]: [363.50916ms] [363.50916ms] END
2021-07-22T13:57:34.069558404Z I0722 13:57:34.069071 1 trace.go:205] Trace[1074862659]: "create" kind: (22-Jul-2021 13:57:33.692) (total time: 376ms):
2021-07-22T13:57:34.069619602Z Trace[1074862659]: ---"success" userID:,hasExtra:false,authenticated:true 376ms (13:57:00.068)
2021-07-22T13:57:34.069631186Z Trace[1074862659]: [376.822821ms] [376.822821ms] END
2021-07-22T13:57:34.769746910Z I0722 13:57:34.769248 1 trace.go:205] Trace[326046901]: "create" kind: (22-Jul-2021 13:57:34.277) (total time: 491ms):
2021-07-22T13:57:34.769792564Z Trace[326046901]: ---"success" userID:,hasExtra:false,authenticated:true 491ms (13:57:00.769)
2021-07-22T13:57:34.769804733Z Trace[326046901]: [491.3121ms] [491.3121ms] END
2021-07-22T13:57:35.567647648Z I0722 13:57:35.567211 1 trace.go:205] Trace[116786849]: "create" kind: (22-Jul-2021 13:57:34.382) (total time: 1185ms):
2021-07-22T13:57:35.567713718Z Trace[116786849]: ---"success" userID:,hasExtra:false,authenticated:true 1185ms (13:57:00.567)
2021-07-22T13:57:35.567731323Z Trace[116786849]: [1.185096775s] [1.185096775s] END
2021-07-22T13:57:35.568188317Z I0722 13:57:35.567905 1 trace.go:205] Trace[1530539446]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.6.245,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:34.381) (total time: 1186ms):
2021-07-22T13:57:35.568241993Z Trace[1530539446]: ---"Object stored in database" 1185ms (13:57:00.567)
2021-07-22T13:57:35.568253474Z Trace[1530539446]: [1.186218738s] [1.186218738s] END
2021-07-22T13:57:35.967876586Z E0722 13:57:35.967376 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.FlowSchema: failed to list v1beta1.FlowSchema: the server could not find the requested resource (get flowschemas.flowcontrol.apiserver.k8s.io)
2021-07-22T13:57:36.271338106Z I0722 13:57:36.270902 1 trace.go:205] Trace[1846488064]: "create" kind: (22-Jul-2021 13:57:35.871) (total time: 399ms):
2021-07-22T13:57:36.271387362Z Trace[1846488064]: ---"success" userID:,hasExtra:false,authenticated:true 399ms (13:57:00.270)
2021-07-22T13:57:36.271399025Z Trace[1846488064]: [399.63992ms] [399.63992ms] END
2021-07-22T13:57:36.869508556Z I0722 13:57:36.869214 1 trace.go:205] Trace[278324571]: "create" kind: (22-Jul-2021 13:57:36.497) (total time: 371ms):
2021-07-22T13:57:36.869571856Z Trace[278324571]: ---"success" userID:,hasExtra:false,authenticated:true 371ms (13:57:00.869)
2021-07-22T13:57:36.869583930Z Trace[278324571]: [371.689607ms] [371.689607ms] END
2021-07-22T13:57:37.370748668Z I0722 13:57:37.370210 1 trace.go:205] Trace[1794160871]: "create" kind: (22-Jul-2021 13:57:36.669) (total time: 700ms):
2021-07-22T13:57:37.370806360Z Trace[1794160871]: ---"success" userID:,hasExtra:false,authenticated:true 700ms (13:57:00.370)
2021-07-22T13:57:37.370819604Z Trace[1794160871]: [700.494545ms] [700.494545ms] END
2021-07-22T13:57:37.370896027Z I0722 13:57:37.370715 1 trace.go:205] Trace[389320273]: "Create" url:/apis/login.concierge.pinniped.dev/v1alpha1/tokencredentialrequests,user-agent:unknown,client:10.244.6.245,accept:,protocol:HTTP/2.0 (22-Jul-2021 13:57:36.669) (total time: 701ms):
2021-07-22T13:57:37.370941284Z Trace[389320273]: ---"Object stored in database" 700ms (13:57:00.370)
2021-07-22T13:57:37.370952674Z Trace[389320273]: [701.354376ms] [701.354376ms] END
2021-07-22T13:57:42.171451436Z I0722 13:57:42.170952 1 trace.go:205] Trace[1553220564]: "create" kind: (22-Jul-2021 13:57:41.775) (total time: 394ms):
2021-07-22T13:57:42.171506069Z Trace[1553220564]: ---"success" userID:,hasExtra:false,authenticated:true 394ms (13:57:00.170)
2021-07-22T13:57:42.171517836Z Trace[1553220564]: [394.953908ms] [394.953908ms] END
2021-07-22T13:57:50.189053787Z E0722 13:57:50.188466 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.PriorityLevelConfiguration: failed to list v1beta1.PriorityLevelConfiguration: the server could not find the requested resource (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io)
2021-07-22T13:58:27.568757289Z E0722 13:58:27.568254 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.FlowSchema: failed to list v1beta1.FlowSchema: the server could not find the requested resource (get flowschemas.flowcontrol.apiserver.k8s.io)
2021-07-22T13:58:43.623960304Z E0722 13:58:43.623506 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.PriorityLevelConfiguration: failed to list v1beta1.PriorityLevelConfiguration: the server could not find the requested resource (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io)
2021-07-22T13:59:19.333842805Z E0722 13:59:19.333154 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.PriorityLevelConfiguration: failed to list v1beta1.PriorityLevelConfiguration: the server could not find the requested resource (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io)
2021-07-22T13:59:26.105353657Z E0722 13:59:26.104881 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.FlowSchema: failed to list v1beta1.FlowSchema: the server could not find the requested resource (get flowschemas.flowcontrol.apiserver.k8s.io)
2021-07-22T14:00:01.712612894Z E0722 14:00:01.712070 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.FlowSchema: failed to list v1beta1.FlowSchema: the server could not find the requested resource (get flowschemas.flowcontrol.apiserver.k8s.io)
2021-07-22T14:00:07.826408523Z E0722 14:00:07.825858 1 reflector.go:138] /cache/gomodcache/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch v1beta1.PriorityLevelConfiguration: failed to list v1beta1.PriorityLevelConfiguration: the server could not find the requested resource (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io)
2021-07-22T14:00:24.412152124Z I0722 14:00:24.411702 1 certs_expirer.go:88] certsExpirerController Sync found a renew delta of -214h23m15.588316507s
2021-07-22T14:00:24.412269213Z I0722 14:00:24.411890 1 certs_expirer.go:88] certsExpirerController Sync found a renew delta of -214h23m15.58812515s
2021-07-22T14:00:24.412614001Z I0722 14:00:24.412415 1 certs_observer.go:69] certsObserverController Sync updated certs in the dynamic cert provider
2021-07-22T14:00:24.693250539Z I0722 14:00:24.692860 1 jwtcachefiller.go:115] jwtcachefiller-controller "msg"="actual jwt authenticator and desired jwt authenticator are the same" "issuer"="https://www.mykeycloak.com/auth/realms/myrealm" "jwtAuthenticator"={"name":"jwt-authenticator"}

keycloak logs: Nothing at the time of access.

What I noticed is that it takes about 12 seconds until the Trace thing in pinniped-concierge is finished. This roughly coincides with the time it took until I got through the "Getting the list of applications..." spinner.

Could this in any way be related to the missing FlowSchema resources pinniped is complaining about?

absoludity commented 3 years ago

> Could this in any way be related to the missing FlowSchema resources pinniped is complaining about?

I don't think so. It looks like in your setup many of the requests to pinniped for a token exchange are taking between 1 and 1.5 seconds (whereas normally they are around 200-300ms, as a few of them are). Search for "total time" in the pinniped-concierge logs above and you can see this.

Thanks for the version info. We'll try running with pinniped 0.9.2 and see if we can reproduce this locally. Either way, I think the issue you've found here is not related to the install/upgrade timeout tracked in this issue, but most likely related to #2875.

absoludity commented 3 years ago

I was just doing some basic profiling of this while working on something related, and a huge part of the delay is actually fetching the repo index (huge for bitnami) and parsing it to get the chart URL.

We should instead get the URL directly from the synced cache and fetch the chart directly.
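
A hedged, runnable Go sketch of the difference, assuming the asset syncer already stores each chart version's tarball URL; the function names, example chart, and timings below are hypothetical, not the actual kubeapps code:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins, not the actual kubeapps code; the sleeps only
// illustrate "fetch and parse the whole index" vs. "indexed lookup".

// Today's path (simplified): download and parse the repository's full
// index.yaml (very large for bitnami) just to resolve one chart's URL.
func chartURLFromIndex(chart, version string) string {
	time.Sleep(6 * time.Second) // simulated index download + YAML parse
	return "https://charts.example.com/" + chart + "-" + version + ".tgz"
}

// Proposed path: the asset syncer has already recorded the tarball URL for
// every chart version it synced, so resolution is a single cache lookup.
func chartURLFromSyncedCache(chart, version string) string {
	time.Sleep(20 * time.Millisecond) // simulated cache/DB lookup
	return "https://charts.example.com/" + chart + "-" + version + ".tgz"
}

func main() {
	start := time.Now()
	fmt.Println(chartURLFromIndex("apache", "8.6.0"))
	fmt.Printf("via full index:   %s\n", time.Since(start).Round(time.Millisecond))

	start = time.Now()
	fmt.Println(chartURLFromSyncedCache("apache", "8.6.0"))
	fmt.Printf("via synced cache: %s\n", time.Since(start).Round(time.Millisecond))
}
```

Either way the chart tarball itself still has to be downloaded, but the large per-request index fetch and parse would drop out of the install/upgrade path.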

absoludity commented 3 years ago

Duplicate of #3567

Marking as a dup of the above since Antonio is working on that one.
