kelseyhightower / kubernetes-the-hard-way

Bootstrap Kubernetes the hard way. No scripts.

Failing - kubectl get componentstatuses --kubeconfig admin.kubeconfig #552

Open pankils opened 4 years ago

pankils commented 4 years ago

Hi. When I try to get the component statuses it works as listed in the Verification step:

kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

When I try --kubeconfig admin.kubeconfig with componentstatuses, I get an unauthorized error:

kubectl get componentstatuses --kubeconfig admin.kubeconfig
error: You must be logged in to the server (Unauthorized)
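
For reference, the tutorial builds admin.kubeconfig roughly like this (a sketch of the "Generating Kubernetes Configuration Files" step; the ca.pem/admin.pem paths assume you run it from the directory holding the certificates):

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig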

Test the nginx HTTP health check proxy:

curl -H "Host: kubernetes.svc.cluster.local" -i http://127.0.0.1/healthz HTTP/1.1 200 OK Server: nginx/1.14.0 (Ubuntu) Date: Sun, 02 Feb 2020 11:36:43 GMT Content-Type: text/plain; charset=utf-8 Content-Length: 2 Connection: keep-alive X-Content-Type-Options: nosniff

I am guessing I am making a mistake somewhere. Any ideas/pointers to help me get this resolved would be appreciated.

Thanks

worp1900 commented 4 years ago

Same here:

worp@controller-0:~$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
Error from server (Forbidden): componentstatuses is forbidden: User "admin" cannot list resource "componentstatuses" in API group "" at the cluster scope

Nginx and componentstatuses without the admin kubeconfig work fine:

worp@controller-0:~$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

worp@controller-0:~$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Tue, 25 Feb 2020 13:24:22 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
X-Content-Type-Options: nosniff
ok
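
One way to check which identity admin.kubeconfig actually presents is to decode the embedded client certificate. A quick sketch, assuming the certificate is embedded as client-certificate-data rather than referenced by a file path:

# Print the subject of the client certificate inside admin.kubeconfig
kubectl config view --kubeconfig=admin.kubeconfig --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject

For the built-in superuser binding to apply, the subject must include O = system:masters.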
brommell commented 4 years ago

I have the same issue too. It looks like the newly created "admin" user doesn't have the necessary rights.

[root@kube03 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
[root@kube03 ~]# kubectl get componentstatuses --kubeconfig=admin.kubeconfig
Error from server (Forbidden): componentstatuses is forbidden: User "admin" cannot list resource "componentstatuses" in API group "" at the cluster scope
[root@kube03 ~]#

listing of admin.kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: xxx
    client-key-data: xxx

edit 27.02.19: my fault, a typo in the "admin" certificate signing request; with O = system:masters it works now:

[root@kube01 ~]# kubectl --kubeconfig=admin.kubeconfig get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[root@kube01 ~]#
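
For anyone hitting the same Forbidden error: the tutorial puts the group into the O field of the admin certificate signing request, roughly like this (the other name fields are cosmetic; only O = system:masters matters for authorization):

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF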
JefferyRodJeff commented 4 years ago

I am having trouble at the same spot, but I am not getting this error. This is what I am seeing:


The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

vnzongzna commented 4 years ago

I am getting a different error message as well.

error: the server doesn't have a resource type "componentstatuses"

on setting verbosity level to 10, I get:

I0708 21:54:11.500822   31545 loader.go:359] Config loaded from file:  admin.kubeconfig
I0708 21:54:11.502155   31545 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.15.3 (linux/amd64) kubernetes/2d3c76f" 'https://127.0.0.1:3389/api?timeout=32s'
I0708 21:54:11.512560   31545 round_trippers.go:438] GET https://127.0.0.1:3389/api?timeout=32s 403 Forbidden in 10 milliseconds
I0708 21:54:11.512663   31545 round_trippers.go:444] Response Headers:
I0708 21:54:11.512701   31545 round_trippers.go:447]     Content-Type: application/json
I0708 21:54:11.512781   31545 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0708 21:54:11.512826   31545 round_trippers.go:447]     Content-Length: 188
I0708 21:54:11.512859   31545 round_trippers.go:447]     Date: Wed, 08 Jul 2020 21:54:11 GMT
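
Note that the trace above talks to https://127.0.0.1:3389 rather than the tutorial's 6443. A quick way to confirm which server and user a kubeconfig resolves to is:

kubectl config view --kubeconfig=admin.kubeconfig --minify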
lanaebk commented 4 years ago

I am having trouble at the same spot, but I am not getting this error. This is what I am seeing:

The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

I am getting this same error and haven't been able to successfully debug. Were you able to figure this out?
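
"Connection refused" usually means nothing is listening on 127.0.0.1:6443 at all. A first check on the controller node (plain systemd/ss commands, nothing specific to this guide):

# Is the API server running, and what did it log last?
sudo systemctl status kube-apiserver --no-pager
sudo journalctl -u kube-apiserver --no-pager | tail -n 30

# Is anything listening on 6443? Empty output explains the refused connection.
sudo ss -tlnp | grep 6443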

andrewlarioza commented 4 years ago

bhkamath commented 3 years ago

I am getting the same error. In my environment I observe that kube-apiserver auto-restarts in a loop: it is active for a couple of milliseconds, then goes into the failed state, and the on-error restart cycle continues. The service output is reported below. All the parameters passed to the kube-apiserver unit appear to be all right.

When I inspected the journalctl -u kube-apiserver output, it reported the error below. I am not certain what it means:

Nov 05 22:25:52 k8s-m-1 kube-apiserver[5430]: Error: unknown api groups 'api
Nov 05 22:25:52 k8s-m-1 kube-apiserver[5430]: Usage:
Nov 05 22:25:52 k8s-m-1 kube-apiserver[5430]:   kube-apiserver [flags]

Please note that my environment is based on CentOS-7 VMs running on a bare-metal CentOS-7 host.

Any input would be appreciated.

mikalai-t commented 3 years ago

@bhkamath CentOS-7 is very sensitive to how argument values are quoted: passing a quoted IP address on the command line will lead to a parsing error. Where possible, try to remove the quotes from the command-line arguments, or even better, move them out into the Environment= section of the systemd unit. Something like this:

...
[Service]
Environment="KUBE_AUTHORIZATION_FLAGS=--authorization-mode=Node,RBAC"
Environment="KUBE_ADMISSION_FLAGS=--enable-admission-plugins=PodSecurityPolicy"
Environment="KUBE_ENABLEMENT_FLAGS=--runtime-config=api/all=true"
# ...
ExecStart=/usr/local/bin/kube-apiserver \
    $KUBE_AUTHORIZATION_FLAGS \
    $KUBE_ADMISSION_FLAGS \
    $KUBE_ENABLEMENT_FLAGS
Type=notify
...
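
After editing the unit file, the changes still need to be applied:

sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
sudo systemctl status kube-apiserver --no-pager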
bhkamath commented 3 years ago

Thank you, your input on removing the quotes did help overcome the stated issue. This is probably also a reason to modify the --runtime-config argument in the 'create the kube-apiserver systemd unit file' step.

saurabh-lath commented 2 years ago
  • I also had the same error: "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?".
  • A rule of thumb is to enable debug logging for the kube-apiserver service (-v=2); that surfaced the error message below: "Aug 8 06:28:28 controller-1 kube-apiserver[27714]: Error: error while parsing encryption provider configuration file "/var/lib/kubernetes/encryption-config.yaml": error while parsing file: resources[0].providers[0].aescbc.keys[0].secret: Invalid value: "REDACTED": secrets must be base64 encoded".
  • After investigating encryption-config.yaml, I found that the scp to the controllers had accidentally included the 'cat' command and the EOF marker, and that the ENCRYPTION_KEY environment variable had not been assigned properly. So I had to get the value of the ENCRYPTION_KEY variable and update it in /var/lib/kubernetes/encryption-config.yaml (a regeneration sketch follows the output below). After restarting the kube-apiserver service, I was able to get the component statuses of the control plane:
alejandro.larioza@controller-1:~$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
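
For reference, the tutorial generates the key and the file roughly like this (a sketch of the "Data Encryption Config" step; note that generating a fresh key is only safe before any secrets have been encrypted with the old one):

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF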

I hope this helps.

@andrewlarioza, how do you turn on debug logging for the kube-apiserver?
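
For what it's worth, verbosity is just another kube-apiserver flag (--v=2 is a klog verbosity level), so one way is to append it to the ExecStart flags in the systemd unit used earlier in this guide and restart:

# in /etc/systemd/system/kube-apiserver.service, append to the ExecStart flags:
#   --v=2
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver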