Closed schroedt closed 3 years ago
/sig auth
Hello,
I noticed the Kubernetes documentation does not name an option for the client secret:
- --oidc-client-secret=***
Therefore I assume that is the reason the API server fails to start. Furthermore, the API server does not come up when using:
- --oidc-groups-prefix=oidc:
- --oidc-username-prefix=oidc:
When wrapping the values in quotes (" "), it works:
- --oidc-groups-prefix="oidc:"
- --oidc-username-prefix="oidc:"
Open Questions
- --authorization-mode=RBAC
spec:
containers:
- command:
- kube-apiserver
- --audit-log-path=/var/log/kube-apiserver.log
- --advertise-address=192.168.190.115
- --allow-privileged=true
- --authorization-mode=RBAC
- --oidc-issuer-url=https://keycloak.example.de:30000/auth/realms/***
- --oidc-client-id=kubernetes-cluster
- --oidc-username-claim=email
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
- --oidc-username-prefix="oidc:"
- --oidc-ca-file=/usr/share/ca-certificates/extra/keycloak.crt # Not required, official Digicert Certificate
[...]
kubectl get nodes
Kubernetes-Master: Wed Jan 6 13:42:10 2021
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 65d v1.20.1
kubernetes-slave NotReady <none> 65d v1.20.1
kubernetes-slave02 NotReady <none> 9d v1.20.1
kubernetes-slave03 NotReady <none> 9d v1.20.1
kubectl describe node kubernetes-master
Name: kubernetes-master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kubernetes-master
kubernetes.io/os=linux
node-role.kubernetes.io/master=
role=storage-node
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.190.115/23
projectcalico.org/IPv4VXLANTunnelAddr: 192.168.13.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 02 Nov 2020 12:20:26 +0100
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kubernetes-master
AcquireTime: <unset>
RenewTime: Wed, 06 Jan 2021 13:32:45 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 05 Jan 2021 22:22:50 +0100 Tue, 05 Jan 2021 22:22:50 +0100 CalicoIsUp Calico is running on this node
MemoryPressure Unknown Wed, 06 Jan 2021 13:32:26 +0100 Wed, 06 Jan 2021 13:34:28 +0100 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Wed, 06 Jan 2021 13:32:26 +0100 Wed, 06 Jan 2021 13:34:28 +0100 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Wed, 06 Jan 2021 13:32:26 +0100 Wed, 06 Jan 2021 13:34:28 +0100 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Wed, 06 Jan 2021 13:32:26 +0100 Wed, 06 Jan 2021 13:34:28 +0100 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 192.168.190.115
Hostname: kubernetes-master
Capacity:
cpu: 4
ephemeral-storage: 101694448Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 6103700Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 93721603122
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 6001300Ki
pods: 110
System Info:
Machine ID: e870def1a41c4046bbe8d3372e93815e
System UUID: 96804D56-EA89-0874-CB90-2FC1A94C59C5
Boot ID: c4dd832a-4acd-489a-b6ef-e02aa1e2d96f
Kernel Version: 4.15.0-128-generic
OS Image: Ubuntu 18.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.6
Kubelet Version: v1.20.1
Kube-Proxy Version: v1.20.1
PodCIDR: 192.168.0.0/24
PodCIDRs: 192.168.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
calico-system calico-kube-controllers-7487d7f956-z4gl9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d
calico-system calico-node-8cjds 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d
calico-system calico-typha-6f95f74dbc-m8bbj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d
kube-system coredns-f9fd979d6-plxqp 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 65d
kube-system coredns-f9fd979d6-xrrtp 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 65d
kube-system etcd-kubernetes-master 100m (2%) 0 (0%) 100Mi (1%) 0 (0%) 19h
kube-system kube-controller-manager-kubernetes-master 200m (5%) 0 (0%) 0 (0%) 0 (0%) 65d
kube-system kube-proxy-5fjq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d
kube-system kube-scheduler-kubernetes-master 100m (2%) 0 (0%) 0 (0%) 0 (0%) 65d
kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-xznsb 100m (2%) 2 (50%) 128Mi (2%) 1Gi (17%) 13d
kubernetes-dashboard kubernetes-dashboard-859bb6555c-rkclr 100m (2%) 2 (50%) 128Mi (2%) 1Gi (17%) 13d
metallb-system speaker-k49db 100m (2%) 100m (2%) 100Mi (1%) 100Mi (1%) 15d
tigera-operator tigera-operator-58f56c4958-xldbl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 900m (22%) 4100m (102%)
memory 596Mi (10%) 2488Mi (42%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 55m (x301 over 15h) kubelet Node kubernetes-master status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 55m (x300 over 15h) kubelet Node kubernetes-master status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 55m (x300 over 15h) kubelet Node kubernetes-master status is now: NodeHasSufficientPID
Normal NodeReady 55m kubelet Node kubernetes-master status is now: NodeReady
Without setting a context, I am able to query via kubectl:
kubectl get pods
No resources found in default namespace.
kubectl get pods -n keycloak
NAME READY STATUS RESTARTS AGE
keycloak-0 2/2 Running 5 3d2h
Is this due to kubectl config?
kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.190.115:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: keycloak_user
user:
auth-provider:
config:
client-id: kubernetes-cluster
client-secret: ***
id-token: eyJhbeyJ[...]WbA
idp-issuer-url: https://keycloak.example.de/auth/realms/***
refresh-token: eyJ[...]--0
name: oidc
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
For me it looks like Kubernetes internal ClusterRoleBindings are missing after enabling RBAC:
less +F /var/logs/syslog
Jan 6 14:04:32 Kubernetes-Master kubelet[1707]: E0106 14:04:32.619510 1707 controller.go:144] failed to ensure lease exists, will retry in 7s, error: leases.coordination.k8s.io "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.516445 1707 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master": nodes "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "nodes" in API group "" at the cluster scope
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.517566 1707 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master": nodes "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "nodes" in API group "" at the cluster scope
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.529853 1707 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master": nodes "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "nodes" in API group "" at the cluster scope
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.544937 1707 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master": nodes "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "nodes" in API group "" at the cluster scope
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.558420 1707 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "kubernetes-master": nodes "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "nodes" in API group "" at the cluster scope
Jan 6 14:04:36 Kubernetes-Master kubelet[1707]: E0106 14:04:36.558460 1707 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
Jan 6 14:04:37 Kubernetes-Master kubelet[1707]: E0106 14:04:37.081666 1707 reflector.go:138] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:kubernetes-master" cannot list resource "secrets" in API group "" in the namespace "calico-system"
Jan 6 14:04:37 Kubernetes-Master kubelet[1707]: E0106 14:04:37.588416 1707 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:node:kubernetes-master" cannot list resource "pods" in API group "" at the cluster scope
Jan 6 14:04:38 Kubernetes-Master kubelet[1707]: E0106 14:04:38.279710 1707 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-master" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Jan 6 14:04:39 Kubernetes-Master kubelet[1707]: E0106 14:04:39.621362 1707 controller.go:144] failed to ensure lease exists, will retry in 7s, error: leases.coordination.k8s.io "kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jan 6 14:04:40 Kubernetes-Master kubelet[1707]: E0106 14:04:40.947788 1707 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:node:kubernetes-master" cannot list resource "services" in API group "" at the cluster scope
Jan 6 14:04:41 Kubernetes-Master kubelet[1707]: W0106 14:04:41.752760 1707 status_manager.go:550] Failed to get status for pod "kube-controller-manager-kubernetes-master_kube-system(2aeea74d3d7e12e84e95fbf72747b440)": pods "kube-controller-manager-kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "pods" in API group "" in the namespace "kube-system"
Jan 6 14:04:41 Kubernetes-Master kubelet[1707]: W0106 14:04:41.753908 1707 status_manager.go:550] Failed to get status for pod "kube-scheduler-kubernetes-master_kube-system(ee4c94eb845abf1878fb3c4c489b1365)": pods "kube-scheduler-kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "pods" in API group "" in the namespace "kube-system"
Jan 6 14:04:41 Kubernetes-Master kubelet[1707]: W0106 14:04:41.755379 1707 status_manager.go:550] Failed to get status for pod "kube-apiserver-kubernetes-master_kube-system(c35be6fd03c81c0601a60e8eab0c2e9f)": pods "kube-apiserver-kubernetes-master" is forbidden: User "system:node:kubernetes-master" cannot get resource "pods" in API group "" in the namespace "kube-system"
- Can someone review whether the prefixes require the quotes (" ")?
If your prefixes contain a colon, you need to quote them (since the : character is significant in yaml)
Why do all nodes shift to status "NotReady" when setting --authorization-mode=RBAC?
By default, nodes rely on the node authorizer, so you need to include that as well (--authorization-mode=Node,RBAC).
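Putting both parts of this answer together, the relevant section of the kube-apiserver static pod manifest would look roughly like this (a sketch based on the flags shown above; quoting the entire YAML list item keeps the colon-containing value intact without the quote characters leaking into the flag value):

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # include the node authorizer alongside RBAC
    - --authorization-mode=Node,RBAC
    # quote the whole list item, not just the part after '='
    - "--oidc-groups-prefix=oidc:"
    - "--oidc-username-prefix=oidc:"
```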
Hi @liggitt,
I updated the configuration to --authorization-mode=Node,RBAC and the API server comes up. I wonder if RBAC is really working, as I can still use kubectl without authentication:
kubectl get pods -n keycloak
NAME READY STATUS RESTARTS AGE
keycloak-0 2/2 Running 5 3d3h
I would expect something like:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "oidc:<email_address>" cannot list pods in the namespace "default"
Is this somehow related to my kube config?
kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.190.115:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: keycloak_user
user:
auth-provider:
config:
client-id: kubernetes-cluster
client-secret: ***
id-token: eyJhbeyJ[...]WbA
idp-issuer-url: https://keycloak.example.de/auth/realms/***
refresh-token: eyJ[...]--0
name: oidc
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
Your kubeconfig is not using keycloak_user, it is using kubernetes-admin, which (if generated by kubeadm) has a client certificate with superuser authority.
Hi @liggitt,
Isn't the superuser certificate overridden by RBAC? I will try switching the context... got to find out how :-)
Isn't the superuser certificate overridden by RBAC?
No. Authorization is additive, so the built-in superuser group receives its permissions regardless of RBAC enablement or configuration.
I will try switching the context... got to find out how :-)
Add a new context and update current-context to reference it like this:
- context:
cluster: kubernetes
user: keycloak_user
name: keycloak_user@kubernetes
current-context: keycloak_user@kubernetes
I updated the file:
vim ./.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS[...]S0tCg==
server: https://192.168.190.115:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: keycloak_user
name: keycloak_user@kubernetes
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: keycloak_user@kubernetes
kind: Config
preferences: {}
users:
- name: keycloak_user
user:
auth-provider:
config:
client-id: kubernetes-cluster
client-secret: ***
id-token: eyJhb[...]32A
idp-issuer-url: https://keycloak.example.de/auth/realms/***
refresh-token: eyJh[...]5Hs
name: oidc
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS[...]UtLS0tLQo=
client-key-data: LS0[...]tLQo=
Afterwards access is denied:
kubectl get nodes
error: You must be logged in to the server (Unauthorized)
I just updated the keycloak_user with a new ID token and refresh token; as this was less than a minute ago, I assume the tokens are not expired.
That means kubectl is not successfully obtaining a token the API server recognizes for authentication.
Could it be that some RoleBinding is missing? I do not see any roles or groups propagated by Keycloak; I guess a mapper is missing. Do you know how they must be provided within the ID token?
ID-Token
{
"exp": 1609957292,
"iat": 1609942892,
"auth_time": 0,
"jti": "ca271e11-a49f-4009-995c-c2368fa0a05e",
"iss": "https://keycloak.example.de/auth/realms/***",
"aud": "kubernetes-cluster",
"sub": "3197f2d4-45fd-4046-98ee-d0331add5862",
"typ": "ID",
"azp": "kubernetes-cluster",
"session_state": "fd428ec5-36d3-4325-98ba-01a87d6b6047",
"at_hash": "HZ7Wg3cYXwBuONacoZ_i4Q",
"acr": "1",
"email_verified": true,
"name": "Bernhard Tester",
"groups": [],
"preferred_username": "test@gmail.com",
"given_name": "Bernhard",
"locale": "de",
"family_name": "Tester",
"email": "test@gmail.com"
}
Refresh-Token
{
"exp": 1609944692,
"iat": 1609942892,
"jti": "f0a25d4a-c3f2-425f-99a1-7c0c7c03a47f",
"iss": "https://keycloak.example.de/auth/realms/***",
"aud": "https://keycloak.example.de/auth/realms/***",
"sub": "3197f2d4-45fd-4046-98ee-d0331add5862",
"typ": "Refresh",
"azp": "kubernetes-cluster",
"session_state": "fd428ec5-36d3-4325-98ba-01a87d6b6047",
"scope": "openid email profile"
}
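Since the groups claim above is empty ([]), it can help to decode tokens locally to see exactly which claims Keycloak emits. The sketch below decodes a JWT payload (the second dot-separated segment, base64url-encoded) without verifying the signature; the sample token is a made-up stand-in, so substitute the id-token from the kubeconfig. In Keycloak, the groups claim is typically filled by adding a "Group Membership" protocol mapper to the client, after which it should contain entries like "/Platform Administrators".

```shell
# Stand-in JWT with payload {"email":"test@gmail.com","groups":[]}; replace
# with the real id-token from ~/.kube/config.
ID_TOKEN='eyJhbGciOiJub25lIn0.eyJlbWFpbCI6InRlc3RAZ21haWwuY29tIiwiZ3JvdXBzIjpbXX0.'

# Take the payload segment and convert base64url to standard base64.
payload=$(printf '%s' "$ID_TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')

# base64url omits '=' padding; restore it before decoding.
pad=$(( (4 - ${#payload} % 4) % 4 ))
while [ "$pad" -gt 0 ]; do payload="${payload}="; pad=$((pad - 1)); done

claims=$(printf '%s' "$payload" | base64 -d)
printf '%s\n' "$claims"   # -> {"email":"test@gmail.com","groups":[]}
```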
Could it be that some RoleBinding is missing?
No, the "You must be logged in to the server (Unauthorized)" error is an authentication error, not an RBAC error
Then the problem must be something in Keycloak. I get a successful login at Keycloak when gathering the ID token and refresh token:
clear && curl -d 'client_id=kubernetes-cluster' -d 'client_secret=***' -d 'username=test@gmail.com' -d 'password=****' -d 'grant_type=password' -d 'scope=openid' 'https://keycloak.example.de/auth/realms/***/protocol/openid-connect/token'
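The token endpoint returns a JSON document; here is a small sketch of pulling the two fields kubectl needs out of that response (RESPONSE is a hypothetical stand-in for the real curl output; the field names id_token and refresh_token come from the standard OIDC token response):

```shell
# Stand-in for: RESPONSE=$(curl -s -d ... 'https://keycloak.example.de/auth/realms/***/protocol/openid-connect/token')
RESPONSE='{"access_token":"eyJa","expires_in":300,"refresh_token":"eyJr","id_token":"eyJi","token_type":"Bearer"}'

# python3 serves as a portable JSON parser here.
ID_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id_token"])')
REFRESH_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["refresh_token"])')
echo "id_token=$ID_TOKEN refresh_token=$REFRESH_TOKEN"
```

The extracted values can then be passed to kubectl config set-credentials as in the command that follows.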
These are then passed to kubectl:
kubectl config set-credentials keycloak_user \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://keycloak.example.de/auth/realms/*** \
--auth-provider-arg=client-id=kubernetes-cluster \
--auth-provider-arg=client-secret=*** \
--auth-provider-arg=refresh-token=eyJh[...]pKI \
--auth-provider-arg=id-token=eyJh[...]yQ
Resulting in:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: L[...]=
server: https://192.168.190.115:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
- context:
cluster: kubernetes
user: keycloak_user
name: keycloak_user@kubernetes
current-context: keycloak_user@kubernetes
kind: Config
preferences: {}
users:
- name: keycloak_user
user:
auth-provider:
config:
client-id: kubernetes-cluster
client-secret: ***
id-token: eyJh[...]CyQ
idp-issuer-url: https://keycloak.example.de/auth/realms/***
refresh-token: ey[...]NpKI
name: oidc
- name: kubernetes-admin
user:
client-certificate-data: LS0[...]o=
client-key-data: LS0[...]=
This leads to:
kubectl get pods
error: You must be logged in to the server (Unauthorized)
Am I missing something? Do I need to wrap the tokens in quotes (" ") when adding them to the config?
I scripted my login process using the CLI tool from Toshiaki Maki. It looks like I had a mismatching user within my kubectl config, or a timing issue with the ID token.
Now I get a new error, due to an incorrect RoleBinding:
kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "\"oidc:\"test@gmail.com" cannot list resource "nodes" in API group "" at the cluster scope
I will deploy a mapping for my Keycloak group "Platform Administrators":
vim crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: oidc-cluster-admins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: "oidc:/Platform Administrators"
- apiGroup: rbac.authorization.k8s.io
kind: User
name: oidc:test@gmail.com
kubectl apply -f crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/oidc-cluster-admins configured
This is the ClusterRole the ClusterRoleBinding references:
kubectl describe clusterrole cluster-admin
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
However, I still get:
kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "\"oidc:\"test@gmail.com" cannot list resource "nodes" in API group "" at the cluster scope
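The malformed username in this error can be reproduced in plain shell: the literal quote characters inside the YAML scalar become part of the prefix value, and the API server prepends that value verbatim to the username claim (a sketch; the variable names are illustrative):

```shell
email='test@gmail.com'

# What the API server receives from:  - --oidc-username-prefix="oidc:"
bad_prefix='"oidc:"'
# What it receives from:              - "--oidc-username-prefix=oidc:"
good_prefix='oidc:'

bad_user="${bad_prefix}${email}"
good_user="${good_prefix}${email}"
echo "$bad_user"    # "oidc:"test@gmail.com  <- the user shown in the Forbidden error
echo "$good_user"   # oidc:test@gmail.com    <- the user the ClusterRoleBinding expects
```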
It looks like this correlates with my API server prefix. How can I add oidc: as the prefix (without the quotes ending up in the value)?
spec:
containers:
- command:
- kube-apiserver
- --audit-log-path=/var/log/kube-apiserver.log
- --advertise-address=192.168.190.115
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --oidc-issuer-url=https://keycloak.example.de:30000/auth/realms/***
- --oidc-client-id=kubernetes-cluster
- --oidc-username-claim=email
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
- --oidc-username-prefix="oidc:"
I think I got it:
- "--oidc-groups-prefix=oidc:"
- "--oidc-username-prefix=oidc:"
Can someone confirm this notation is intended for /etc/kubernetes/manifests/kube-apiserver.yaml?
I think I got it:
- "--oidc-groups-prefix=oidc:" - "--oidc-username-prefix=oidc:"
That is correct
Hello everyone,
I think I am facing an issue similar to #64206. When adding OIDC (Keycloak), running inside the affected on-prem Kubernetes cluster, I receive errors when switching the client type from public to confidential (requiring a client secret). Once re-configured, the API server refuses connections, and the port on the master node is not in listening mode (unused). I faced this issue while enabling RBAC for Kiali (see Kiali Discussion #3557 for many details).
Please note that Keycloak is running with an official DigiCert SSL certificate (1 year runtime), even though my nodes (neither master nor slaves) trusted DigiCert for some reason (I added it to the Ubuntu root CA trust). In case the API server needs to "load" the DigiCert certificate as well, a short hint for mounting it is appreciated.
As #64206 requested a certain curl:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
Can you please also check whether the group and user prefixes are configured correctly?