vladimirvivien / ktop

A top-like tool for your Kubernetes clusters
Apache License 2.0

Pods panel is empty: couldn't fetch from API #19

Closed: Dentrax closed this issue 1 year ago

Dentrax commented 2 years ago

I just tried the latest version (v0.3.0) and noticed that ktop doesn't populate the Pods panel.

[Screenshot: Screen Shot 2022-06-14 at 17 09 08, ktop UI with an empty Pods panel]

But there are 570 pods:

$ kubectl get pods -A | wc -l

570

Any thoughts on this?

vladimirvivien commented 2 years ago

@Dentrax, well, if you made it this far (the UI started), that means the account you are using has access to get namespaces, nodes, and pods. There may be a couple of issues:

Make sure the account, for the given cluster context, has sufficient rights to get/list namespaces, nodes, and pods (and metrics, if present).

Possibly some other access rights issue caused by the objects being retrieved.
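One quick way to verify the get/list rights (assuming kubectl is pointed at the same context ktop is using) is kubectl auth can-i, for example:

$ kubectl auth can-i list namespaces
$ kubectl auth can-i list nodes
$ kubectl auth can-i list pods --all-namespaces
$ kubectl auth can-i list pods.metrics.k8s.io --all-namespaces

The last check only matters if the metrics server is installed.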

Hopefully that helped.

Dentrax commented 2 years ago

Actually, I'm using a cluster-admin kubeconfig and am able to list all pods, as shown above.

Regarding "possibly some other access rights issue caused by the objects being retrieved":

Can you please elaborate a bit on what you mean by other access rights?

vladimirvivien commented 2 years ago

@Dentrax if you have admin rights, you should be able to see everything (pods, namespaces, nodes, etc.). Other access rights include listing objects such as PVs, ReplicaSets, Jobs, etc. (everything in the summary panel). Again, if you are using cluster-admin, you should be able to see everything with no issue.

One quick check (if you don't mind): spin up a kind (or minikube) cluster locally and let me know if you are seeing empty panels once connected. Thanks.
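Something like this should be enough (assuming kind is installed; ktop should pick up the current kubeconfig context, which kind sets when the cluster is created):

$ kind create cluster
$ ktop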

7onn commented 2 years ago

Hey everyone o/ I am facing the same issue of pods not being listed.

I tested with a brand new kind cluster and it worked fine.

But when accessing my cluster endpoint, which is a Teleport proxy handling Okta authentication, the pods are never listed. I am also suspecting the payload size.
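A rough way to gauge how large and slow that same list call is when it goes through the proxy:

$ kubectl get pods -A -o json | wc -c
$ time kubectl get pods -A -o json > /dev/null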

I'm running Kubernetes v1.25.2 and this is the cluster status:

Nodes: 28
Namespaces: 23
Pods: 543/630 (606 imgs)
Deployments: 304/319
Sets: replicas 304, daemons 169, stateful 25
Jobs: 241 (cron: 0)
PVs: 46 (331Gi)
PVCs: 46 (331Gi)

vladimirvivien commented 2 years ago

Hi @7onn, apologies for the delay. Your scenario, accessing your API server endpoint behind a proxy, is very specific, and I have no immediate solutions or suggestions for it.

Does kubectl work with your Teleport proxy? If so, do you have to pass any additional params to kubectl in order to talk to the API server?

7onn commented 2 years ago

I tested with a nearly empty cluster behind the Teleport proxy, and the pods were listed. The issue here is probably the number of pods.

vladimirvivien commented 2 years ago

@7onn Thank you for looking into this and confirming that this may be an issue with the number of pods. The tool was not tested against clusters that large. I will look at the code to see if there is a quick workaround.

One question: do you have the metrics server installed in your large cluster? The current implementation does an additional metrics-server lookup for each pod it finds, and I am thinking this may also add to the issue.
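If it helps, a quick way to check that the metrics server is installed and responding:

$ kubectl get apiservice v1beta1.metrics.k8s.io
$ kubectl top nodes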

Thanks.

7onn commented 1 year ago

"do you have the metrics server installed in your large cluster"

I do =) Pulling summarized data from the metrics server sounds more efficient than kubectl get pods.

vladimirvivien commented 1 year ago

Unfortunately, the pods summary from the metrics server only includes metrics, not pod states (or any other data). It is still necessary to retrieve additional info from the informer. Thanks again.
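For example, a raw PodMetrics listing only carries per-container CPU/memory usage, with no pod phase, conditions, or restart counts:

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods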

shayneoneill commented 1 year ago

I should note I had a similar problem. I rebuilt the metrics server (it was basically a smoking ruin from my predecessor's inexperience) and now it works great.
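For anyone else landing here, redeploying the stock manifest is usually just the following (assuming the standard upstream metrics-server release artifact):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml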

vladimirvivien commented 1 year ago

@shayneoneill Interesting addition to this issue. Thanks for sharing. What do you mean by "rebuilt the metrics server"? Do you mean you rebuilt it from source?